Revisiting Parameter-Efficient Tuning: Are We Really There Yet?
arXiv:2202.07962 · 16 February 2022
Guanzheng Chen, Fangyu Liu, Zaiqiao Meng, Shangsong Liang

Papers citing "Revisiting Parameter-Efficient Tuning: Are We Really There Yet?"

18 of 18 citing papers shown:

Understanding Layer Significance in LLM Alignment
Guangyuan Shi, Zexin Lu, Xiaoyu Dong, Wenlong Zhang, Xuanyu Zhang, Yujie Feng, Xiao-Ming Wu
23 Oct 2024

DARE the Extreme: Revisiting Delta-Parameter Pruning For Fine-Tuned Models
Wenlong Deng, Yize Zhao, V. Vakilian, Minghui Chen, Xiaoxiao Li, Christos Thrampoulidis
12 Oct 2024

A Federated Learning-Friendly Approach for Parameter-Efficient Fine-Tuning of SAM in 3D Segmentation
Mothilal Asokan, Difei Gao, Joya Chen, Mike Zheng Shou
31 Jul 2024

Personalized LLM Response Generation with Parameterized Memory Injection
Kai Zhang, Lizhi Qing, Yangyang Kang
04 Apr 2024

A Comprehensive Evaluation of Parameter-Efficient Fine-Tuning on Software Engineering Tasks
Wentao Zou, Qi Li, Jidong Ge, Chuanyi Li, Xiaoyu Shen, LiGuo Huang, Bin Luo
25 Dec 2023

Federated Full-Parameter Tuning of Billion-Sized Language Models with Communication Cost under 18 Kilobytes
Zhen Qin, Daoyuan Chen, Bingchen Qian, Bolin Ding, Yaliang Li, Shuiguang Deng
11 Dec 2023

SiRA: Sparse Mixture of Low Rank Adaptation
Yun Zhu, Nevan Wichers, Chu-Cheng Lin, Xinyi Wang, Tianlong Chen, ..., Han Lu, Canoee Liu, Liangchen Luo, Jindong Chen, Lei Meng
15 Nov 2023

A Language Model of Java Methods with Train/Test Deduplication
Chia-Yi Su, Aakash Bansal, Vijayanta Jain, S. Ghanavati, Collin McMillan
15 May 2023

A Comprehensive Analysis of Adapter Efficiency
Nandini Mundra, Sumanth Doddapaneni, Raj Dabre, Anoop Kunchukuttan, Ratish Puduppully, Mitesh M. Khapra
12 May 2023

When Federated Learning Meets Pre-trained Language Models' Parameter-Efficient Tuning Methods
Zhuo Zhang, Yuanhang Yang, Yong Dai, Lizhen Qu, Zenglin Xu
20 Dec 2022

ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning
Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem Choshen
02 Dec 2022

When does Parameter-Efficient Transfer Learning Work for Machine Translation?
A. Ustun, Asa Cooper Stickland
23 May 2022

SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer
Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, Daniel Matthew Cer
15 Oct 2021

P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang
14 Oct 2021

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
18 Apr 2021

Mixout: Effective Regularization to Finetune Large-scale Pretrained Language Models
Cheolhyoung Lee, Kyunghyun Cho, Wanmo Kang
25 Sep 2019

Model Evaluation, Model Selection, and Algorithm Selection in Machine Learning
S. Raschka
13 Nov 2018

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
20 Apr 2018