Vote&Mix: Plug-and-Play Token Reduction for Efficient Vision Transformer
arXiv: 2408.17062
Published: 30 August 2024
Authors: Shuai Peng, Di Fu, Baole Wei, Yong Cao, Liangcai Gao, Zhi Tang
Tags: ViT

Papers citing "Vote&Mix: Plug-and-Play Token Reduction for Efficient Vision Transformer" (3 of 3 papers shown)

Hydra Attention: Efficient Attention with Many Heads
Daniel Bolya, Cheng-Yang Fu, Xiaoliang Dai, Peizhao Zhang, Judy Hoffman
15 Sep 2022

GroupViT: Semantic Segmentation Emerges from Text Supervision
Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, X. Wang
Tags: ViT, VLM
22 Feb 2022

Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao
Tags: ViT
24 Feb 2021