ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv: 2110.07577
UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning

14 October 2021
Yuning Mao, Lambert Mathias, Rui Hou, Amjad Almahairi, Hao Ma, Jiawei Han, Wen-tau Yih, Madian Khabsa

Papers citing "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning"

Showing 50 of 134 citing papers.

Advancing Parameter Efficiency in Fine-tuning via Representation Editing
Muling Wu, Tianlong Li, Xiaohua Wang, Changze Lv, Zixuan Ling, Jianhao Zhu, Cenyuan Zhang, Xiaoqing Zheng, Xuanjing Huang (23 Feb 2024)

LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models
Yifan Yang, Jiajun Zhou, Ngai Wong, Zheng Zhang (18 Feb 2024)

QDyLoRA: Quantized Dynamic Low-Rank Adaptation for Efficient Large Language Model Tuning
Hossein Rajabzadeh, Mojtaba Valipour, Tianshu Zhu, Marzieh S. Tahaei, Hyock Ju Kwon, Ali Ghodsi, Boxing Chen, Mehdi Rezagholizadeh (16 Feb 2024)

Time-, Memory- and Parameter-Efficient Visual Adaptation
Otniel-Bogdan Mercea, Alexey Gritsenko, Cordelia Schmid, Anurag Arnab (05 Feb 2024) [VLM]

X-PEFT: eXtremely Parameter-Efficient Fine-Tuning for Extreme Multi-Profile Scenarios
Namju Kwak, Taesup Kim (29 Jan 2024) [MoE]

A Comprehensive Survey of Compression Algorithms for Language Models
Seungcheol Park, Jaehyeon Choi, Sojin Lee, U. Kang (27 Jan 2024) [MQ]

PRILoRA: Pruned and Rank-Increasing Low-Rank Adaptation
Nadav Benedek, Lior Wolf (20 Jan 2024)

Quantized Side Tuning: Fast and Memory-Efficient Tuning of Quantized Large Language Models
Zhengxin Zhang, Dan Zhao, Xupeng Miao, Gabriele Oliaro, Qing Li, Yong Jiang, Zhihao Jia (13 Jan 2024) [MQ]

FullLoRA: Efficiently Boosting the Robustness of Pretrained Vision Transformers
Zheng Yuan, Jie Zhang, Shiguang Shan, Xilin Chen (03 Jan 2024)

Efficient Multi-domain Text Recognition Deep Neural Network Parameterization with Residual Adapters
Jiayou Chao, Wei Zhu (01 Jan 2024)

Parameter-Efficient Fine-Tuning Methods for Pretrained Language Models: A Critical Review and Assessment
Lingling Xu, Haoran Xie, S. J. Qin, Xiaohui Tao, F. Wang (19 Dec 2023)

Traffic Signal Control Using Lightweight Transformers: An Offline-to-Online RL Approach
Xingshuai Huang, Di Wu, Benoit Boulet (12 Dec 2023) [OffRL]

ControlNet-XS: Designing an Efficient and Effective Architecture for Controlling Text-to-Image Diffusion Models
Denis Zavadski, Johann-Friedrich Feiden, Carsten Rother (11 Dec 2023) [DiffM]

Batched Low-Rank Adaptation of Foundation Models
Yeming Wen, Swarat Chaudhuri (09 Dec 2023) [OffRL]

MultiLoRA: Democratizing LoRA for Better Multi-Task Learning
Yiming Wang, Yu Lin, Xiaodong Zeng, Guannan Zhang (20 Nov 2023) [MoMe]

Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning
Clifton A. Poth, Hannah Sterz, Indraneil Paul, Sukannya Purkayastha, Leon Arne Engländer, Timo Imhof, Ivan Vulić, Sebastian Ruder, Iryna Gurevych, Jonas Pfeiffer (18 Nov 2023)

SiRA: Sparse Mixture of Low Rank Adaptation
Yun Zhu, Nevan Wichers, Chu-Cheng Lin, Xinyi Wang, Tianlong Chen, ..., Han Lu, Canoee Liu, Liangchen Luo, Jindong Chen, Lei Meng (15 Nov 2023) [MoE]

PEMA: An Offsite-Tunable Plug-in External Memory Adaptation for Language Models
HyunJin Kim, Young Jin Kim, Jinyeong Bak (14 Nov 2023) [KELM]

Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization
Weiyang Liu, Zeju Qiu, Yao Feng, Yuliang Xiu, Yuxuan Xue, ..., Songyou Peng, Yandong Wen, Michael J. Black, Adrian Weller, Bernhard Schölkopf (10 Nov 2023)

Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch
Le Yu, Yu Bowen, Haiyang Yu, Fei Huang, Yongbin Li (06 Nov 2023) [MoMe]

Mixture-of-Linguistic-Experts Adapters for Improving and Interpreting Pre-trained Language Models
Raymond Li, Gabriel Murray, Giuseppe Carenini (24 Oct 2023) [MoE]

Improving generalization in large language models by learning prefix subspaces
Louis Falissard, Vincent Guigue, Laure Soulier (24 Oct 2023)

Pre-Trained Language Models Augmented with Synthetic Scanpaths for Natural Language Understanding
Shuwen Deng, Paul Prasse, D. R. Reich, Tobias Scheffer, Lena A. Jäger (23 Oct 2023)

RSAdapter: Adapting Multimodal Models for Remote Sensing Visual Question Answering
Yuduo Wang, Pedram Ghamisi (19 Oct 2023)

Uncertainty-aware Parameter-Efficient Self-training for Semi-supervised Language Understanding
Jianing Wang, Qiushi Sun, Nuo Chen, Chengyu Wang, Jun Huang, Ming Gao, Xiang Li (19 Oct 2023) [UQLM]

Non-Intrusive Adaptation: Input-Centric Parameter-efficient Fine-Tuning for Versatile Multimodal Modeling
Yaqing Wang, Jialin Wu, T. Dabral, Jiageng Zhang, Geoff Brown, ..., Frederick Liu, Yi Liang, Bo Pang, Michael Bendersky, Radu Soricut (18 Oct 2023) [VLM]

Decomposed Prompt Tuning via Low-Rank Reparameterization
Yao Xiao, Lu Xu, Jiaxi Li, Wei Lu, Xiaoli Li (16 Oct 2023) [VLM]

TAIL: Task-specific Adapters for Imitation Learning with Large Pretrained Models
Zuxin Liu, Jesse Zhang, Kavosh Asadi, Yao Liu, Ding Zhao, Shoham Sabach, Rasool Fakoor (09 Oct 2023) [ALM, AI4CE]

Hierarchical Side-Tuning for Vision Transformers
Weifeng Lin, Ziheng Wu, Wentao Yang, Mingxin Huang, Jun Huang, Lianwen Jin (09 Oct 2023)

ScaLearn: Simple and Highly Parameter-Efficient Task Transfer by Learning to Scale
Markus Frohmann, Carolin Holtermann, Shahed Masoudian, Anne Lauscher, Navid Rekabsaz (02 Oct 2023)

Scaled Prompt-Tuning for Few-Shot Natural Language Generation
Ting Hu, Christoph Meinel, Haojin Yang (13 Sep 2023)

Exploring the Benefits of Differentially Private Pre-training and Parameter-Efficient Fine-tuning for Table Transformers
Xilong Wang, Chia-Mu Yu, Pin-Yu Chen (12 Sep 2023)

DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning
Zhengxiang Shi, Aldo Lipani (11 Sep 2023) [VLM]

Parameter and Computation Efficient Transfer Learning for Vision-Language Pre-trained Models
Qiong Wu, Wei Yu, Yiyi Zhou, Shubin Huang, Xiaoshuai Sun, Rongrong Ji (04 Sep 2023) [VLM]

IncreLoRA: Incremental Parameter Allocation Method for Parameter-Efficient Fine-tuning
Feiyu F. Zhang, Liangzhi Li, Jun-Cheng Chen, Zhouqian Jiang, Bowen Wang, Yiming Qian (23 Aug 2023)

VLN-PETL: Parameter-Efficient Transfer Learning for Vision-and-Language Navigation
Yanyuan Qiao, Zheng Yu, Qi Wu (20 Aug 2023) [VLM]

VL-PET: Vision-and-Language Parameter-Efficient Tuning via Granularity Control
Zi-Yuan Hu, Yanyang Li, Michael R. Lyu, Liwei Wang (18 Aug 2023) [VLM]

Pluggable Neural Machine Translation Models via Memory-augmented Adapters
Yuzhuang Xu, Shuo Wang, Peng Li, Xuebo Liu, Xiaolong Wang, Weidong Liu, Yang Liu (12 Jul 2023)

Approximated Prompt Tuning for Vision-Language Pre-trained Models
Qiong Wu, Shubin Huang, Yiyi Zhou, Pingyang Dai, Annan Shu, Guannan Jiang, Rongrong Ji (27 Jun 2023) [VLM, VP]

Learning to Modulate pre-trained Models in RL
Thomas Schmied, M. Hofmarcher, Fabian Paischer, Razvan Pascanu, Sepp Hochreiter (26 Jun 2023) [CLL, OffRL]

RS5M and GeoRSCLIP: A Large Scale Vision-Language Dataset and A Large Vision-Language Model for Remote Sensing
Zilun Zhang, Tiancheng Zhao, Yulong Guo, Yuxiang Cai (20 Jun 2023) [DiffM, VLM]

Make Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning
Baohao Liao, Shaomu Tan, Christof Monz (01 Jun 2023) [KELM]

Jointly Reparametrized Multi-Layer Adaptation for Efficient and Private Tuning
Umang Gupta, Aram Galstyan, Greg Ver Steeg (30 May 2023)

Domain Specialization as the Key to Make Large Language Models Disruptive: A Comprehensive Survey
Chen Ling, Xujiang Zhao, Jiaying Lu, Chengyuan Deng, Can Zheng, ..., Chris White, Quanquan Gu, Jian Pei, Carl Yang, Liang Zhao (30 May 2023) [ALM]

One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning
Guangtao Zeng, Peiyuan Zhang, Wei Lu (28 May 2023)

Neural Architecture Search for Parameter-Efficient Fine-tuning of Large Pre-trained Language Models
Neal Lawton, Anoop Kumar, Govind Thattai, Aram Galstyan, Greg Ver Steeg (26 May 2023)

Parameter-Efficient Language Model Tuning with Active Learning in Low-Resource Settings
Josip Jukić, Jan Šnajder (23 May 2023)

Memory-Efficient Fine-Tuning of Compressed Large Language Models via sub-4-bit Integer Quantization
Jeonghoon Kim, J. H. Lee, Sungdong Kim, Joonsuk Park, Kang Min Yoo, S. Kwon, Dongsoo Lee (23 May 2023) [MQ]

G-Adapter: Towards Structure-Aware Parameter-Efficient Transfer Learning for Graph Transformer Networks
Anchun Gui, Jinqiang Ye, Han Xiao (17 May 2023)

Visual Tuning
Bruce X. B. Yu, Jianlong Chang, Haixin Wang, Lin Liu, Shijie Wang, ..., Lingxi Xie, Haojie Li, Zhouchen Lin, Qi Tian, Chang Wen Chen (10 May 2023) [VLM]