ResearchTrend.AI

ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer (arXiv:2306.06446)

10 June 2023
Haoran You, Huihong Shi, Yipin Guo, Yingyan Lin

Papers citing "ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer" (9 papers)
Channel-wise Parallelizable Spiking Neuron with Multiplication-free Dynamics and Large Temporal Receptive Fields
Peng Xue, Wei Fang, Zhengyu Ma, Zihan Huang, Zhaokun Zhou, Yonghong Tian, T. Masquelier, Huihui Zhou
24 Jan 2025
NASH: Neural Architecture and Accelerator Search for Multiplication-Reduced Hybrid Models
Yang Xu, Huihong Shi, Zhongfeng Wang
07 Sep 2024
EViT: An Eagle Vision Transformer with Bi-Fovea Self-Attention
Yulong Shi, Mingwei Sun, Yongshuai Wang, Hui Sun, Zengqiang Chen
10 Oct 2023
Hydra Attention: Efficient Attention with Many Heads
Daniel Bolya, Cheng-Yang Fu, Xiaoliang Dai, Peizhao Zhang, Judy Hoffman
15 Sep 2022
BiT: Robustly Binarized Multi-distilled Transformer
Zechun Liu, Barlas Oğuz, Aasish Pappu, Lin Xiao, Scott Yih, Meng Li, Raghuraman Krishnamoorthi, Yashar Mehdad
25 May 2022
Energon: Towards Efficient Acceleration of Transformers Using Dynamic Sparse Attention
Zhe Zhou, Junling Liu, Zhenyu Gu, Guangyu Sun
18 Oct 2021
MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer
Sachin Mehta, Mohammad Rastegari
05 Oct 2021
Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao
24 Feb 2021
ShiftAddNet: A Hardware-Inspired Deep Network
Haoran You, Xiaohan Chen, Yongan Zhang, Chaojian Li, Sicheng Li, Zihao Liu, Zhangyang Wang, Yingyan Lin
24 Oct 2020