AdapterFusion: Non-Destructive Task Composition for Transfer Learning
arXiv:2005.00247 · 1 May 2020
Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, Iryna Gurevych
Tags: CLL, MoMe
Papers citing "AdapterFusion: Non-Destructive Task Composition for Transfer Learning" (50 of 133 papers shown)
HSplitLoRA: A Heterogeneous Split Parameter-Efficient Fine-Tuning Framework for Large Language Models (05 May 2025)
Zheng Lin, Yuxin Zhang, Zhe Chen, Zihan Fang, Xianhao Chen, Praneeth Vepakomma, Wei Ni, Jun-Jie Luo, Yue Gao [MoE]

TT-LoRA MoE: Unifying Parameter-Efficient Fine-Tuning and Sparse Mixture-of-Experts (29 Apr 2025)
Pradip Kunwar, Minh Vu, Maanak Gupta, Mahmoud Abdelsalam, Manish Bhattarai [MoE, MoMe]

E-InMeMo: Enhanced Prompting for Visual In-Context Learning (25 Apr 2025)
Jiahao Zhang, Bowen Wang, Hong Liu, Liangzhi Li, Yuta Nakashima, Hajime Nagahara [VLM]

Efficient Knowledge Transfer in Multi-Task Learning through Task-Adaptive Low-Rank Representation (20 Apr 2025)
Xiao Zhang, Kangsheng Wang, Tianyu Hu, Huimin Ma

Large (Vision) Language Models are Unsupervised In-Context Learners (03 Apr 2025)
Artyom Gadetsky, Andrei Atanov, Yulun Jiang, Zhitong Gao, Ghazal Hosseini Mighan, Amir Zamir, Maria Brbić [VLM, MLLM, LRM]

Parameter-Efficient Fine-Tuning of Large Language Models via Deconvolution in Subspace (03 Mar 2025)
Jia-Chen Zhang, Yu-Jie Xiong, Chun-Ming Xia, Dong-Hai Zhu, Xi-He Qiu

Make LoRA Great Again: Boosting LoRA with Adaptive Singular Values and Mixture-of-Experts Optimization Alignment (24 Feb 2025)
Chenghao Fan, Zhenyi Lu, Sichen Liu, Xiaoye Qu, Wei Wei, Chengfeng Gu, Yu-Xi Cheng [MoE]

Vision-Language Models for Edge Networks: A Comprehensive Survey (11 Feb 2025)
Ahmed Sharshar, Latif U. Khan, Waseem Ullah, Mohsen Guizani [VLM]

SSH: Sparse Spectrum Adaptation via Discrete Hartley Transformation (08 Feb 2025)
Yixian Shen, Qi Bi, Jia-Hong Huang, Hongyi Zhu, Andy D. Pimentel, Anuj Pathania

Memory-Efficient Fine-Tuning of Transformers via Token Selection (31 Jan 2025)
Antoine Simoulin, Namyong Park, Xiaoyi Liu, Grey Yang

BLoB: Bayesian Low-Rank Adaptation by Backpropagation for Large Language Models (28 Jan 2025)
Yibin Wang, H. Shi, Ligong Han, Dimitris N. Metaxas, Hao Wang [BDL, UQLM]

KaSA: Knowledge-Aware Singular-Value Adaptation of Large Language Models (08 Dec 2024)
Fan Wang, Juyong Jiang, Chansung Park, Sunghun Kim, Jing Tang

Initialization using Update Approximation is a Silver Bullet for Extremely Efficient Low-Rank Fine-Tuning (29 Nov 2024)
Kaustubh Ponkshe, Raghav Singhal, Eduard A. Gorbunov, Alexey Tumanov, Samuel Horváth, Praneeth Vepakomma

Parameter-Efficient Fine-Tuning in Large Models: A Survey of Methodologies (24 Oct 2024)
L. Wang, Sheng Chen, Linnan Jiang, Shu Pan, Runze Cai, Sen Yang, Fei Yang

Decoding Time Series with LLMs: A Multi-Agent Framework for Cross-Domain Annotation (22 Oct 2024)
M. Lin, Z. Chen, Yanchi Liu, Xujiang Zhao, Zongyu Wu, Junxiang Wang, Xiang Zhang, Suhang Wang, Haifeng Chen [AI4TS]

MTL-LoRA: Low-Rank Adaptation for Multi-Task Learning (12 Oct 2024)
Yaming Yang, Dilxat Muhtar, Yelong Shen, Yuefeng Zhan, Jianfeng Liu, ..., Denvy Deng, Feng Sun, Qi Zhang, Weizhu Chen, Yunhai Tong [MoE, MoMe]

DA-Ada: Learning Domain-Aware Adapter for Domain Adaptive Object Detection (11 Oct 2024)
H. Li, Rui Zhang, Hantao Yao, X. Zhang, Yifan Hao, Xinkai Song, Xiaqing Li, Yongwei Zhao, Ling Li, Yunji Chen [ObjD, VLM]

SLIM: Let LLM Learn More and Forget Less with Soft LoRA and Identity Mixture (10 Oct 2024)
Jiayi Han, Liang Du, Hongwei Du, Xiangguo Zhou, Yiwen Wu, Weibo Zheng, Donghong Han [CLL, MoMe, MoE]

Detecting Bias and Enhancing Diagnostic Accuracy in Large Language Models for Healthcare (09 Oct 2024)
Pardis Sadat Zahraei, Zahra Shakeri [LM&MA]

LoRTA: Low Rank Tensor Adaptation of Large Language Models (05 Oct 2024)
Ignacio Hounie, Charilaos I. Kanatsoulis, Arnuv Tandon, Alejandro Ribeiro

Circuit Compositions: Exploring Modular Structures in Transformer-Based Language Models (02 Oct 2024)
Philipp Mondorf, Sondre Wold, Barbara Plank

CROME: Cross-Modal Adapters for Efficient Multimodal LLM (13 Aug 2024)
Sayna Ebrahimi, Sercan Ö. Arik, Tejas Nama, Tomas Pfister

Exploiting the Semantic Knowledge of Pre-trained Text-Encoders for Continual Learning (02 Aug 2024)
Lu Yu, Hesong Li, Ying Fu, J. Weijer, Changsheng Xu [CLL]

Low-Rank Interconnected Adaptation Across Layers (13 Jul 2024)
Yibo Zhong, Yao Zhou [OffRL, MoE]

MiLoRA: Harnessing Minor Singular Components for Parameter-Efficient LLM Finetuning (13 Jun 2024)
Hanqing Wang, Zeguan Xiao, Shuo Wang, Guanhua Chen, Guanhua Chen

CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for Task-Aware Parameter-Efficient Fine-tuning (07 Jun 2024)
Yibo Yang, Xiaojie Li, Zhongzhu Zhou, S. Song, Jianlong Wu, Liqiang Nie, Bernard Ghanem

Hypernetworks for Personalizing ASR to Atypical Speech (06 Jun 2024)
Max Müller-Eberstein, Dianna Yee, Karren D. Yang, G. Mantena, Colin S. Lea

Low-Rank Adaption on Transformer-based Oriented Object Detector for Satellite Onboard Processing of Remote Sensing Images (04 Jun 2024)
Xinyang Pu, Feng Xu

TS-Align: A Teacher-Student Collaborative Framework for Scalable Iterative Finetuning of Large Language Models (30 May 2024)
Chen Zhang, Chengguang Tang, Dading Chong, Ke Shi, Guohua Tang, Feng Jiang, Haizhou Li

Towards Modular LLMs by Building and Reusing a Library of LoRAs (18 May 2024)
O. Ostapenko, Zhan Su, E. Ponti, Laurent Charlin, Nicolas Le Roux, Matheus Pereira, Lucas Page-Caccia, Alessandro Sordoni [MoMe]

DP-DyLoRA: Fine-Tuning Transformer-Based Models On-Device under Differentially Private Federated Learning using Dynamic Low-Rank Adaptation (10 May 2024)
Jie Xu, Karthikeyan P. Saravanan, Rogier van Dalen, Haaris Mehmood, David Tuckey, Mete Ozay

The Trade-off between Performance, Efficiency, and Fairness in Adapter Modules for Text Classification (03 May 2024)
Minh Duc Bui, K. Wense

FeDeRA: Efficient Fine-tuning of Language Models in Federated Learning Leveraging Weight Decomposition (29 Apr 2024)
Yuxuan Yan, Qianqian Yang, Shunpu Tang, Zhiguo Shi

LoRA Dropout as a Sparsity Regularizer for Overfitting Control (15 Apr 2024)
Yang Lin, Xinyu Ma, Xu Chu, Yujie Jin, Zhibang Yang, Yasha Wang, Hong-yan Mei

AdapterSwap: Continuous Training of LLMs with Data Removal and Access-Control Guarantees (12 Apr 2024)
William Fleshman, Aleem Khan, Marc Marone, Benjamin Van Durme [CLL, KELM]

Parameter Efficient Quasi-Orthogonal Fine-Tuning via Givens Rotation (05 Apr 2024)
Xinyu Ma, Xu Chu, Zhibang Yang, Yang Lin, Xin Gao, Junfeng Zhao

PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models (03 Apr 2024)
Fanxu Meng, Zhaohui Wang, Muhan Zhang [VLM]

SuperLoRA: Parameter-Efficient Unified Adaptation of Multi-Layer Attention Modules (18 Mar 2024)
Xiangyu Chen, Jing Liu, Ye Wang, Pu Wang, Matthew Brand, Guanghui Wang, T. Koike-Akino

DAM: Dynamic Adapter Merging for Continual Video QA Learning (13 Mar 2024)
Feng Cheng, Ziyang Wang, Yi-Lin Sung, Yan-Bo Lin, Mohit Bansal, Gedas Bertasius [CLL, MoMe]

FlexLLM: A System for Co-Serving Large Language Model Inference and Parameter-Efficient Finetuning (29 Feb 2024)
Xupeng Miao, Gabriele Oliaro, Xinhao Cheng, Vineeth Kada, Ruohan Gao, ..., April Yang, Yingcheng Wang, Mengdi Wu, Colin Unger, Zhihao Jia [MoE]

Quantized Embedding Vectors for Controllable Diffusion Language Models (15 Feb 2024)
Cheng Kang, Xinye Chen, Yong Hu, Daniel Novak

LinguAlchemy: Fusing Typological and Geographical Elements for Unseen Language Generalization (11 Jan 2024)
Muhammad Farid Adilazuarda, Samuel Cahyawijaya, Alham Fikri Aji, Genta Indra Winata, Ayu Purwarianti

Sparse is Enough in Fine-tuning Pre-trained Large Language Models (19 Dec 2023)
Weixi Song, Z. Li, Lefei Zhang, Hai Zhao, Bo Du [VLM]

Tied-Lora: Enhancing parameter efficiency of LoRA with weight tying (16 Nov 2023)
Adithya Renduchintala, Tugrul Konuk, Oleksii Kuchaiev [MoMe]

Language and Task Arithmetic with Parameter-Efficient Layers for Zero-Shot Summarization (15 Nov 2023)
Alexandra Chronopoulou, Jonas Pfeiffer, Joshua Maynez, Xinyi Wang, Sebastian Ruder, Priyanka Agrawal [MoMe]

Audio-AdapterFusion: A Task-ID-free Approach for Efficient and Non-Destructive Multi-task Speech Recognition (17 Oct 2023)
Hillary Ngai, Rohan Agrawal, Neeraj Gaur, Ronny Huang, Parisa Haghani, P. M. Mengibar [MoMe]

Decomposed Prompt Tuning via Low-Rank Reparameterization (16 Oct 2023)
Yao Xiao, Lu Xu, Jiaxi Li, Wei Lu, Xiaoli Li [VLM]

BanglaNLP at BLP-2023 Task 2: Benchmarking different Transformer Models for Sentiment Analysis of Bangla Social Media Posts (13 Oct 2023)
Saumajit Saha, Albert Nanda

Federated Class-Incremental Learning with Prompting (13 Oct 2023)
Jiale Liu, Yu-Wei Zhan, Chong-Yu Zhang, Xin Luo, Zhen-Duo Chen, Yinwei Wei [CLL, FedML]

IncreLoRA: Incremental Parameter Allocation Method for Parameter-Efficient Fine-tuning (23 Aug 2023)
Feiyu F. Zhang, Liangzhi Li, Jun-Cheng Chen, Zhouqian Jiang, Bowen Wang, Yiming Qian