arXiv 2406.15034 · Cited By
SVFormer: A Direct Training Spiking Transformer for Efficient Video Action Recognition
21 June 2024
Liutao Yu, Liwei Huang, Chenlin Zhou, Han Zhang, Zhengyu Ma, Huihui Zhou, Yonghong Tian
ViT
Papers citing "SVFormer: A Direct Training Spiking Transformer for Efficient Video Action Recognition" (6 of 6 papers shown)
Optimal ANN-SNN Conversion for High-accuracy and Ultra-low-latency Spiking Neural Networks
Tong Bu, Wei Fang, Jianhao Ding, Penglin Dai, Zhaofei Yu, Tiejun Huang
08 Mar 2023 · 100 · 191 · 0
Spikformer: When Spiking Neural Network Meets Transformer
Zhaokun Zhou, Yuesheng Zhu, Chao He, Yaowei Wang, Shuicheng Yan, Yonghong Tian, Liuliang Yuan
29 Sep 2022 · 140 · 231 · 0
UniFormer: Unifying Convolution and Self-attention for Visual Recognition
Kunchang Li, Yali Wang, Junhao Zhang, Peng Gao, Guanglu Song, Yu Liu, Hongsheng Li, Yu Qiao
ViT
24 Jan 2022 · 125 · 223 · 0
Is Space-Time Attention All You Need for Video Understanding?
Gedas Bertasius, Heng Wang, Lorenzo Torresani
ViT
09 Feb 2021 · 272 · 1,939 · 0
Deep Residual Learning in Spiking Neural Networks
Wei Fang, Zhaofei Yu, Yanqing Chen, Tiejun Huang, T. Masquelier, Yonghong Tian
08 Feb 2021 · 119 · 470 · 0
MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam
3DH
17 Apr 2017 · 948 · 20,214 · 0