arXiv: 2303.08685
Making Vision Transformers Efficient from A Token Sparsification View
15 March 2023
Shuning Chang, Pichao Wang, Ming Lin, Fan Wang, David Junhao Zhang, Rong Jin, Mike Zheng Shou
Topics: ViT
Papers citing "Making Vision Transformers Efficient from A Token Sparsification View" (7 papers)

Attention-aware Semantic Communications for Collaborative Inference
Jiwoong Im, Nayoung Kwon, Taewoo Park, Jiheon Woo, Jaeho Lee, Yongjune Kim (23 Feb 2024)

Morphing Tokens Draw Strong Masked Image Models
Taekyung Kim, Byeongho Heo, Dongyoon Han (30 Dec 2023)

Visual Parser: Representing Part-whole Hierarchies with Transformers
Shuyang Sun, Xiaoyu Yue, S. Bai, Philip H. S. Torr (13 Jul 2021)

Emerging Properties in Self-Supervised Vision Transformers
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin (29 Apr 2021)

Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao (24 Feb 2021) — Topics: ViT

Is Space-Time Attention All You Need for Video Understanding?
Gedas Bertasius, Heng Wang, Lorenzo Torresani (09 Feb 2021) — Topics: ViT

Graph-Based Global Reasoning Networks
Yunpeng Chen, Marcus Rohrbach, Zhicheng Yan, Shuicheng Yan, Jiashi Feng, Yannis Kalantidis (30 Nov 2018) — Topics: GNN, NAI