arXiv:2208.13138
ClusTR: Exploring Efficient Self-attention via Clustering for Vision Transformers
28 August 2022
Yutong Xie, Jianpeng Zhang, Yong-quan Xia, A. Hengel, Qi Wu
Papers citing "ClusTR: Exploring Efficient Self-attention via Clustering for Vision Transformers" (5 papers)
QuadTree Attention for Vision Transformers
Shitao Tang, Jiahui Zhang, Siyu Zhu, Ping Tan
ViT, 08 Jan 2022

Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao
ViT, 24 Feb 2021

Big Bird: Transformers for Longer Sequences
Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, ..., Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed
VLM, 28 Jul 2020

Whole-Body Human Pose Estimation in the Wild
Sheng Jin, Lumin Xu, Jin Xu, Can Wang, Wentao Liu, Chao Qian, Wanli Ouyang, Ping Luo
3DH, 23 Jul 2020

Efficient Content-Based Sparse Attention with Routing Transformers
Aurko Roy, M. Saffar, Ashish Vaswani, David Grangier
MoE, 12 Mar 2020