arXiv:2309.06891
Keep It SimPool: Who Said Supervised Transformers Suffer from Attention Deficit?
13 September 2023
Bill Psomas, Ioannis Kakogeorgiou, Konstantinos Karantzalos, Yannis Avrithis
Papers citing "Keep It SimPool: Who Said Supervised Transformers Suffer from Attention Deficit?" (6 papers shown):

Vision Transformers Need Registers
Timothée Darcet, Maxime Oquab, Julien Mairal, Piotr Bojanowski (28 Sep 2023)

Localizing Objects with Self-Supervised Transformers and no Labels
Oriane Siméoni, Gilles Puy, Huy V. Vo, Simon Roburin, Spyros Gidaris, Andrei Bursuc, P. Pérez, Renaud Marlet, Jean Ponce (29 Sep 2021)

Intriguing Properties of Vision Transformers
Muzammal Naseer, Kanchana Ranasinghe, Salman Khan, Munawar Hayat, F. Khan, Ming-Hsuan Yang (21 May 2021)

Emerging Properties in Self-Supervised Vision Transformers
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin (29 Apr 2021)

Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao (24 Feb 2021)

Learning and aggregating deep local descriptors for instance-level recognition
Giorgos Tolias, Tomáš Jeníček, Ondřej Chum (26 Jul 2020)