AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition
arXiv:2205.13535 · 26 May 2022
Shoufa Chen, Chongjian Ge, Zhan Tong, Jiangliu Wang, Yibing Song, Jue Wang, Ping Luo
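This page carries no abstract beyond the title, so as orientation only: AdaptFormer is a parameter-efficient tuning method for Vision Transformers. The sketch below shows the general shape of a parallel bottleneck adapter attached to a frozen transformer MLP, which is the kind of module the paper introduces. It is a minimal illustration assuming a PyTorch setting; the class names, the dimensions (`dim`, `bottleneck`), and the `scale` factor are assumptions made for this sketch, not values taken from this page.

```python
# Hypothetical sketch of a parallel bottleneck adapter; all names,
# dimensions, and the scaling factor are illustrative assumptions.
import torch
import torch.nn as nn

class ParallelAdapter(nn.Module):
    """Bottleneck MLP added in parallel to a frozen transformer MLP."""
    def __init__(self, dim: int = 768, bottleneck: int = 64, scale: float = 0.1):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)  # project to a small bottleneck
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck, dim)    # project back to model width
        self.scale = scale                      # scales the adapter branch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.scale * self.up(self.act(self.down(x)))

class AdaptedBlockMLP(nn.Module):
    """Frozen pretrained MLP plus a trainable parallel adapter branch."""
    def __init__(self, mlp: nn.Module, dim: int = 768):
        super().__init__()
        self.mlp = mlp
        for p in self.mlp.parameters():         # freeze the pretrained weights
            p.requires_grad = False
        self.adapter = ParallelAdapter(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.mlp(x) + self.adapter(x)    # parallel residual branch
```

In a setup like this, only the adapter's two small linear layers receive gradients while the pretrained backbone stays frozen, which is where the parameter savings come from.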
Papers citing "AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition" (18 of 18 papers shown)
Adapting In-Domain Few-Shot Segmentation to New Domains without Retraining (30 Apr 2025)
Qi Fan, Kaiqi Liu, Nian Liu, Hisham Cholakkal, Rao Muhammad Anwer, Wenbin Li, Yang Gao
Citations: 0
BARIS: Boundary-Aware Refinement with Environmental Degradation Priors for Robust Underwater Instance Segmentation (28 Apr 2025)
Pin-Chi Pan, Soo-Chang Pei
Citations: 0
E-InMeMo: Enhanced Prompting for Visual In-Context Learning (25 Apr 2025)
Jiahao Zhang, Bowen Wang, Hong Liu, Liangzhi Li, Yuta Nakashima, Hajime Nagahara
Tags: VLM · Citations: 0
Enhancing Pre-Trained Model-Based Class-Incremental Learning through Neural Collapse (25 Apr 2025)
Kun He, Zijian Song, Shuoxi Zhang, John E. Hopcroft
Tags: CLL · Citations: 0
Learning Optimal Prompt Ensemble for Multi-source Visual Prompt Transfer (09 Apr 2025)
Enming Zhang, Liwen Cao, Yanru Wu, Zijie Zhao, Guan Wang, Yang Li
Citations: 0
Efficient Self-Supervised Learning for Earth Observation via Dynamic Dataset Curation (09 Apr 2025)
Thomas Kerdreux, Alexandre Tuel, Quentin Febvre, Alexis Mouche, Bertrand Chapron
Citations: 0
Rethinking the Bias of Foundation Model under Long-tailed Distribution (27 Jan 2025)
Jiahao Chen, Bin Qin, Jiangmeng Li, Hao Chen, Bing Su
Citations: 0
Dynamic Integration of Task-Specific Adapters for Class Incremental Learning (23 Sep 2024)
Jiashuo Li, Shaokun Wang, Bo Qian, Yuhang He, Xing Wei, Qiang Wang, Yihong Gong
Tags: CLL · Citations: 1
Masked Autoencoders Are Scalable Vision Learners (11 Nov 2021)
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick
Tags: ViT, TPM · Citations: 5,353
P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks (14 Oct 2021)
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang
Tags: VLM · Citations: 604
MLP-Mixer: An all-MLP Architecture for Vision (04 May 2021)
Ilya O. Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, ..., Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, Alexey Dosovitskiy
Citations: 2,132
Emerging Properties in Self-Supervised Vision Transformers (29 Apr 2021)
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin
Citations: 4,299
ImageNet-21K Pretraining for the Masses (22 Apr 2021)
Tal Ridnik, Emanuel Ben-Baruch, Asaf Noy, Lihi Zelnik-Manor
Tags: SSeg, VLM, CLIP · Citations: 542
The Power of Scale for Parameter-Efficient Prompt Tuning (18 Apr 2021)
Brian Lester, Rami Al-Rfou, Noah Constant
Tags: VPVLM · Citations: 2,999
Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions (24 Feb 2021)
Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao
Tags: ViT · Citations: 2,898
Is Space-Time Attention All You Need for Video Understanding? (09 Feb 2021)
Gedas Bertasius, Heng Wang, Lorenzo Torresani
Tags: ViT · Citations: 1,486
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding (20 Apr 2018)
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
Tags: ELM · Citations: 6,003
A disciplined approach to neural network hyper-parameters: Part 1 -- learning rate, batch size, momentum, and weight decay (26 Mar 2018)
Leslie N. Smith
Citations: 943