AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition

26 May 2022
Shoufa Chen, Chongjian Ge, Zhan Tong, Jiangliu Wang, Yibing Song, Jue Wang, Ping Luo

Papers citing "AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition"

18 / 18 papers shown
Adapting In-Domain Few-Shot Segmentation to New Domains without Retraining
Qi Fan, Kaiqi Liu, Nian Liu, Hisham Cholakkal, Rao Muhammad Anwer, Wenbin Li, Yang Gao
35 · 0 · 0 · 30 Apr 2025

BARIS: Boundary-Aware Refinement with Environmental Degradation Priors for Robust Underwater Instance Segmentation
Pin-Chi Pan, Soo-Chang Pei
31 · 0 · 0 · 28 Apr 2025

E-InMeMo: Enhanced Prompting for Visual In-Context Learning
Jiahao Zhang, Bowen Wang, Hong Liu, Liangzhi Li, Yuta Nakashima, Hajime Nagahara
VLM
77 · 0 · 0 · 25 Apr 2025

Enhancing Pre-Trained Model-Based Class-Incremental Learning through Neural Collapse
Kun He, Zijian Song, Shuoxi Zhang, J. Hopcroft
CLL
49 · 0 · 0 · 25 Apr 2025

Learning Optimal Prompt Ensemble for Multi-source Visual Prompt Transfer
Enming Zhang, Liwen Cao, Yanru Wu, Zijie Zhao, Guan Wang, Yang Li
31 · 0 · 0 · 09 Apr 2025

Efficient Self-Supervised Learning for Earth Observation via Dynamic Dataset Curation
Thomas Kerdreux, A. Tuel, Quentin Febvre, A. Mouche, Bertrand Chapron
45 · 0 · 0 · 09 Apr 2025

Rethinking the Bias of Foundation Model under Long-tailed Distribution
Jiahao Chen, Bin Qin, Jiangmeng Li, Hao Chen, Bing-Huang Su
33 · 0 · 0 · 27 Jan 2025

Dynamic Integration of Task-Specific Adapters for Class Incremental Learning
Jiashuo Li, Shaokun Wang, Bo Qian, Yuhang He, Xing Wei, Qiang Wang, Yihong Gong
CLL
45 · 1 · 0 · 23 Sep 2024

Masked Autoencoders Are Scalable Vision Learners
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick
ViT, TPM
233 · 5,353 · 0 · 11 Nov 2021

P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang
VLM
206 · 604 · 0 · 14 Oct 2021

MLP-Mixer: An all-MLP Architecture for Vision
Ilya O. Tolstikhin, N. Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, ..., Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, Alexey Dosovitskiy
216 · 2,132 · 0 · 04 May 2021

Emerging Properties in Self-Supervised Vision Transformers
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin
257 · 4,299 · 0 · 29 Apr 2021

ImageNet-21K Pretraining for the Masses
T. Ridnik, Emanuel Ben-Baruch, Asaf Noy, Lihi Zelnik-Manor
SSeg, VLM, CLIP
138 · 542 · 0 · 22 Apr 2021

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
VPVLM
254 · 2,999 · 0 · 18 Apr 2021

Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao
ViT
245 · 2,898 · 0 · 24 Feb 2021

Is Space-Time Attention All You Need for Video Understanding?
Gedas Bertasius, Heng Wang, Lorenzo Torresani
ViT
261 · 1,486 · 0 · 09 Feb 2021

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
ELM
267 · 6,003 · 0 · 20 Apr 2018

A disciplined approach to neural network hyper-parameters: Part 1 -- learning rate, batch size, momentum, and weight decay
L. Smith
160 · 943 · 0 · 26 Mar 2018