On the Effectiveness of Adapter-based Tuning for Pretrained Language Model Adaptation (arXiv:2106.03164)
6 June 2021
Ruidan He, Linlin Liu, Hai Ye, Qingyu Tan, Bosheng Ding, Liying Cheng, Jia-Wei Low, Lidong Bing, Luo Si
Papers citing "On the Effectiveness of Adapter-based Tuning for Pretrained Language Model Adaptation" (29 papers):
- IterIS: Iterative Inference-Solving Alignment for LoRA Merging. Hongxu Chen, Runshi Li, Bowei Zhu, Zhen Wang, Long Chen. 21 Nov 2024.
- LoRA-Pro: Are Low-Rank Adapters Properly Optimized? Zhengbo Wang, Jian Liang, Ran He, Zilei Wang, Tieniu Tan. 25 Jul 2024.
- Compensate Quantization Errors+: Quantized Models Are Inquisitive Learners. Yifei Gao, Jie Ou, Lei Wang, Fanhua Shang, Jaji Wu. 22 Jul 2024.
- Trans-LoRA: Towards Data-free Transferable Parameter Efficient Finetuning. Runqian Wang, Soumya Ghosh, David D. Cox, Diego Antognini, Aude Oliva, Rogerio Feris, Leonid Karlinsky. 27 May 2024.
- When LLMs Meet Cybersecurity: A Systematic Literature Review. Jie Zhang, Haoyu Bu, Hui Wen, Yu Chen, Lun Li, Hongsong Zhu. 06 May 2024.
- Let Your Graph Do the Talking: Encoding Structured Data for LLMs. Bryan Perozzi, Bahare Fatemi, Dustin Zelle, Anton Tsitsulin, Mehran Kazemi, Rami Al-Rfou, Jonathan J. Halcrow. 08 Feb 2024.
- A Comprehensive Evaluation of Parameter-Efficient Fine-Tuning on Software Engineering Tasks. Wentao Zou, Qi Li, Jidong Ge, Chuanyi Li, Xiaoyu Shen, LiGuo Huang, Bin Luo. 25 Dec 2023.
- PrivateLoRA For Efficient Privacy Preserving LLM. Yiming Wang, Yu Lin, Xiaodong Zeng, Guannan Zhang. 23 Nov 2023.
- Efficient Domain Adaptation of Sentence Embeddings Using Adapters. Tim Schopf, Dennis Schneider, Florian Matthes. 06 Jul 2023.
- Harnessing the Power of Adversarial Prompting and Large Language Models for Robust Hypothesis Generation in Astronomy. I. Ciucă, Y. Ting 丁, Sandor Kruk, K. Iyer. 20 Jun 2023.
- AdapterEM: Pre-trained Language Model Adaptation for Generalized Entity Matching using Adapter-tuning. John Bosco Mugeni, S. Lynden, Toshiyuki Amagasa, Akiyoshi Matono. 30 May 2023.
- On Robustness of Finetuned Transformer-based NLP Models. Pavan Kalyan Reddy Neerudu, S. Oota, Mounika Marreddy, Venkateswara Rao Kagita, Manish Gupta. 23 May 2023.
- A Stability Analysis of Fine-Tuning a Pre-Trained Model. Z. Fu, Anthony Man-Cho So, Nigel Collier. 24 Jan 2023.
- CHAPTER: Exploiting Convolutional Neural Network Adapters for Self-supervised Speech Models. Zih-Ching Chen, Yu-Shun Sung, Hung-yi Lee. 01 Dec 2022.
- AF Adapter: Continual Pretraining for Building Chinese Biomedical Language Model. Yongyu Yan, Kui Xue, Xiaoming Shi, Qi Ye, Jingping Liu, Tong Ruan. 21 Nov 2022.
- Learning Better Intent Representations for Financial Open Intent Classification. Xianzhi Li, Will Aitken, Xiao-Dan Zhu, Stephen W. Thomas. 25 Oct 2022.
- Evaluating Parameter Efficient Learning for Generation. Peng-Tao Xu, M. Patwary, Shrimai Prabhumoye, Virginia Adams, R. Prenger, Wei Ping, Nayeon Lee, M. Shoeybi, Bryan Catanzaro. 25 Oct 2022.
- Efficient Few-Shot Fine-Tuning for Opinion Summarization. Arthur Bražinskas, Ramesh Nallapati, Mohit Bansal, Markus Dreyer. 04 May 2022.
- Adaptable Adapters. N. Moosavi, Quentin Delfosse, Kristian Kersting, Iryna Gurevych. 03 May 2022.
- IDPG: An Instance-Dependent Prompt Generation Method. Zhuofeng Wu, Sinong Wang, Jiatao Gu, Rui Hou, Yuxiao Dong, V. Vydiswaran, Hao Ma. 09 Apr 2022.
- Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models. Ning Ding, Yujia Qin, Guang Yang, Fu Wei, Zonghan Yang, ..., Jianfei Chen, Yang Liu, Jie Tang, Juan Li, Maosong Sun. 14 Mar 2022.
- ELLE: Efficient Lifelong Pre-training for Emerging Data. Yujia Qin, Jiajie Zhang, Yankai Lin, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou. 12 Mar 2022.
12 Mar 2022
- Input-Tuning: Adapting Unfamiliar Inputs to Frozen Pretrained Models. Shengnan An, Yifei Li, Zeqi Lin, Qian Liu, Bei Chen, Qiang Fu, Weizhu Chen, Nanning Zheng, Jian-Guang Lou. 07 Mar 2022.
- Revisiting Parameter-Efficient Tuning: Are We Really There Yet? Guanzheng Chen, Fangyu Liu, Zaiqiao Meng, Shangsong Liang. 16 Feb 2022.
- Enhancing Multilingual Language Model with Massive Multilingual Knowledge Triples. Linlin Liu, Xin Li, Ruidan He, Lidong Bing, Shafiq R. Joty, Luo Si. 22 Nov 2021.
- Semi-Siamese Bi-encoder Neural Ranking Model Using Lightweight Fine-Tuning. Euna Jung, Jaekeol Choi, Wonjong Rhee. 28 Oct 2021.
- FreeLB: Enhanced Adversarial Training for Natural Language Understanding. Chen Zhu, Yu Cheng, Zhe Gan, S. Sun, Tom Goldstein, Jingjing Liu. 25 Sep 2019.
- Mixout: Effective Regularization to Finetune Large-scale Pretrained Language Models. Cheolhyoung Lee, Kyunghyun Cho, Wanmo Kang. 25 Sep 2019.
- GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman. 20 Apr 2018.