ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer
arXiv: 2306.06446 · 10 June 2023
Haoran You, Huihong Shi, Yipin Guo, Yingyan Lin
ArXiv | PDF | HTML

Papers citing "ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer" (9 papers)

Channel-wise Parallelizable Spiking Neuron with Multiplication-free Dynamics and Large Temporal Receptive Fields
Peng Xue, Wei Fang, Zhengyu Ma, Zihan Huang, Zhaokun Zhou, Yonghong Tian, T. Masquelier, Huihui Zhou
24 Jan 2025

NASH: Neural Architecture and Accelerator Search for Multiplication-Reduced Hybrid Models
Yang Xu, Huihong Shi, Zhongfeng Wang
07 Sep 2024

EViT: An Eagle Vision Transformer with Bi-Fovea Self-Attention
Yulong Shi, Mingwei Sun, Yongshuai Wang, Hui Sun, Zengqiang Chen
10 Oct 2023

Hydra Attention: Efficient Attention with Many Heads
Daniel Bolya, Cheng-Yang Fu, Xiaoliang Dai, Peizhao Zhang, Judy Hoffman
15 Sep 2022

BiT: Robustly Binarized Multi-distilled Transformer
Zechun Liu, Barlas Oğuz, Aasish Pappu, Lin Xiao, Scott Yih, Meng Li, Raghuraman Krishnamoorthi, Yashar Mehdad
25 May 2022

Energon: Towards Efficient Acceleration of Transformers Using Dynamic Sparse Attention
Zhe Zhou, Junling Liu, Zhenyu Gu, Guangyu Sun
18 Oct 2021

MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer
Sachin Mehta, Mohammad Rastegari
05 Oct 2021

Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao
24 Feb 2021

ShiftAddNet: A Hardware-Inspired Deep Network
Haoran You, Xiaohan Chen, Yongan Zhang, Chaojian Li, Sicheng Li, Zihao Liu, Zhangyang Wang, Yingyan Lin
24 Oct 2020