Mixture of LoRA Experts (arXiv: 2404.13628)
Xun Wu, Shaohan Huang, Furu Wei
21 April 2024
Tags: MoMe

Papers citing "Mixture of LoRA Experts" (15 papers)

NoEsis: Differentially Private Knowledge Transfer in Modular LLM Adaptation
Rob Romijnders, Stefanos Laskaridis, Ali Shahin Shamsabadi, Hamed Haddadi
25 Apr 2025

Communication-Efficient and Personalized Federated Foundation Model Fine-Tuning via Tri-Matrix Adaptation
Y. Li, Bo Liu, Sheng Huang, Z. Zhang, Xiaotong Yuan, Richang Hong
31 Mar 2025

CAMEx: Curvature-aware Merging of Experts
Dung V. Nguyen, Minh H. Nguyen, Luc Q. Nguyen, R. Teo, T. Nguyen, Linh Duy Tran
Tags: MoMe
26 Feb 2025

Rank Also Matters: Hierarchical Configuration for Mixture of Adapter Experts in LLM Fine-Tuning
Peizhuang Cong, Wenpu Liu, Wenhan Yu, Haochen Zhao, Tong Yang
Tags: ALM, MoE
06 Feb 2025

Task Arithmetic in Trust Region: A Training-Free Model Merging Approach to Navigate Knowledge Conflicts
Wenju Sun, Qingyong Li, Wen Wang, Yangli-ao Geng, Boyang Li
28 Jan 2025

Closed-form merging of parameter-efficient modules for Federated Continual Learning
Riccardo Salami, Pietro Buzzega, Matteo Mosconi, Jacopo Bonato, Luigi Sabetta, Simone Calderara
Tags: FedML, MoMe, CLL
23 Oct 2024

Scalable Multi-Domain Adaptation of Language Models using Modular Experts
Peter Schafhalter, Shun Liao, Yanqi Zhou, Chih-Kuan Yeh, Arun Kandoor, James Laudon
Tags: MoE
14 Oct 2024

Functional-level Uncertainty Quantification for Calibrated Fine-tuning on LLMs
Ruijia Niu, D. Wu, Rose Yu, Yi-An Ma
09 Oct 2024

DLP-LoRA: Efficient Task-Specific LoRA Fusion with a Dynamic, Lightweight Plugin for Large Language Models
Yuxuan Zhang, Ruizhe Li
Tags: MoMe
02 Oct 2024

HDMoLE: Mixture of LoRA Experts with Hierarchical Routing and Dynamic Thresholds for Fine-Tuning LLM-based ASR Models
Bingshen Mu, Kun Wei, Qijie Shao, Yong Xu, Lei Xie
Tags: MoE
30 Sep 2024

On-Device Collaborative Language Modeling via a Mixture of Generalists and Specialists
Dongyang Fan, Bettina Messmer, N. Doikov, Martin Jaggi
Tags: MoMe, MoE
20 Sep 2024

MoWE-Audio: Multitask AudioLLMs with Mixture of Weak Encoders
W. Zhang, Shuo Sun, Bin Wang, Xunlong Zou, Zhuohan Liu, Yingxu He, Geyu Lin, Nancy F. Chen, A. Aw
Tags: AuLLM
10 Sep 2024

Solving Token Gradient Conflict in Mixture-of-Experts for Large Vision-Language Model
Longrong Yang, Dong Shen, Chaoxiang Cai, Fan Yang, Size Li, Di Zhang, Xi Li
Tags: MoE
28 Jun 2024

Towards Modular LLMs by Building and Reusing a Library of LoRAs
O. Ostapenko, Zhan Su, E. Ponti, Laurent Charlin, Nicolas Le Roux, Matheus Pereira, Lucas Page-Caccia, Alessandro Sordoni
Tags: MoMe
18 May 2024

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
Tags: VPVLM
18 Apr 2021