ResearchTrend.AI

Parameter-Efficient and Memory-Efficient Tuning for Vision Transformer: A Disentangled Approach
arXiv:2407.06964

9 July 2024
Taolin Zhang, Jiawang Bai, Zhihe Lu, Dongze Lian, Genping Wang, Xinchao Wang, Shu-Tao Xia

Papers citing "Parameter-Efficient and Memory-Efficient Tuning for Vision Transformer: A Disentangled Approach"

11 papers
Faster Parameter-Efficient Tuning with Token Redundancy Reduction
Kwonyoung Kim, Jungin Park, Jin-Hwa Kim, Hyeongjun Kwon, Kwanghoon Sohn
26 Mar 2025

LCM: Locally Constrained Compact Point Cloud Model for Masked Point Modeling [Mamba]
Yaohua Zha, Naiqi Li, Yanzi Wang, Tao Dai, Hang Guo, Bin Chen, Zhi Wang, Zhihao Ouyang, Shu-Tao Xia
27 May 2024

Sparse-Tuning: Adapting Vision Transformers with Efficient Fine-tuning and Inference
Ting Liu, Xuyang Liu, Liangtao Shi, Zunnan Xu, Siteng Huang, Yi Xin, Quanjun Yin
23 May 2024

Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers [AAML, SILM]
Shengyuan Yang, Jiawang Bai, Kuofeng Gao, Yong-Liang Yang, Yiming Li, Shu-Tao Xia
17 May 2024

MambaIR: A Simple Baseline for Image Restoration with State-Space Model [Mamba]
Hang Guo, Jinmin Li, Tao Dai, Zhihao Ouyang, Xudong Ren, Shu-Tao Xia
23 Feb 2024

AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition
Shoufa Chen, Chongjian Ge, Zhan Tong, Jiangliu Wang, Yibing Song, Jue Wang, Ping Luo
26 May 2022

Masked Autoencoders Are Scalable Vision Learners [ViT, TPM]
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick
11 Nov 2021

P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks [VLM]
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang
14 Oct 2021

Learning to Prompt for Vision-Language Models [VPVLM, CLIP, VLM]
Kaiyang Zhou, Jingkang Yang, Chen Change Loy, Ziwei Liu
02 Sep 2021

Emerging Properties in Self-Supervised Vision Transformers
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin
29 Apr 2021

The Power of Scale for Parameter-Efficient Prompt Tuning [VPVLM]
Brian Lester, Rami Al-Rfou, Noah Constant
18 Apr 2021