SiRA: Sparse Mixture of Low Rank Adaptation
arXiv:2311.09179 · 15 November 2023
Yun Zhu, Nevan Wichers, Chu-Cheng Lin, Xinyi Wang, Tianlong Chen, Lei Shu, Han Lu, Canoee Liu, Liangchen Luo, Jindong Chen, Lei Meng
Tags: MoE
Papers citing "SiRA: Sparse Mixture of Low Rank Adaptation" (28 of 28 papers shown)

TT-LoRA MoE: Unifying Parameter-Efficient Fine-Tuning and Sparse Mixture-of-Experts
Pradip Kunwar, Minh Vu, Maanak Gupta, Mahmoud Abdelsalam, Manish Bhattarai · MoE, MoMe · 29 Apr 2025

A Stronger Mixture of Low-Rank Experts for Fine-Tuning Foundation Models
Mengyang Sun, Yihao Wang, Tao Feng, Dan Zhang, Yifan Zhu, J. Tang · MoE · 20 Feb 2025

Ensembles of Low-Rank Expert Adapters
Yinghao Li, Vianne Gao, Chao Zhang, MohamadAli Torkamani · 31 Jan 2025

Each Rank Could be an Expert: Single-Ranked Mixture of Experts LoRA for Multi-Task Learning
Ziyu Zhao, Yixiao Zhou, Didi Zhu, Tao Shen, X. Wang, Jing Su, Kun Kuang, Zhongyu Wei, Fei Wu, Yu Cheng · MoE · 28 Jan 2025

GraphLoRA: Empowering LLMs Fine-Tuning via Graph Collaboration of MoE
Ting Bai, Yue Yu, Le Huang, Zenan Xu, Zhe Zhao, Chuan Shi · MoE · 18 Dec 2024

MoSLD: An Extremely Parameter-Efficient Mixture-of-Shared LoRAs for Multi-Task Learning
Lulu Zhao, Weihao Zeng, Xiaofeng Shi, Hua Zhou · MoMe, MoE · 12 Dec 2024

MoDULA: Mixture of Domain-Specific and Universal LoRA for Multi-Task Learning
Yufei Ma, Zihan Liang, Huangyu Dai, B. Chen, D. Gao, ..., Linbo Jin, Wen Jiang, Guannan Zhang, Xiaoyan Cai, Libin Yang · MoE, MoMe · 10 Dec 2024

Collaborative and Efficient Personalization with Mixtures of Adaptors
Abdulla Jasem Almansoori, Samuel Horváth, Martin Takáč · FedML · 04 Oct 2024

HDMoLE: Mixture of LoRA Experts with Hierarchical Routing and Dynamic Thresholds for Fine-Tuning LLM-based ASR Models
Bingshen Mu, Kun Wei, Qijie Shao, Yong Xu, Lei Xie · MoE · 30 Sep 2024

MoDE: Effective Multi-task Parameter Efficient Fine-Tuning with a Mixture of Dyadic Experts
Lin Ning, Harsh Lara, Meiqi Guo, Abhinav Rastogi · MoMe, MoE · 02 Aug 2024

PMoE: Progressive Mixture of Experts with Asymmetric Transformer for Continual Learning
Min Jae Jung, Romain Rouvoy · KELM, MoE, CLL · 31 Jul 2024

A Survey on LoRA of Large Language Models
Yuren Mao, Yuhang Ge, Yijiang Fan, Wenyi Xu, Yu Mi, Zhonghao Hu, Yunjun Gao · ALM · 08 Jul 2024

Retrieval-Augmented Mixture of LoRA Experts for Uploadable Machine Learning
Ziyu Zhao, Leilei Gan, Guoyin Wang, Yuwei Hu, Tao Shen, Hongxia Yang, Kun Kuang, Fei Wu · MoE, MoMe · 24 Jun 2024

Crayon: Customized On-Device LLM via Instant Adapter Blending and Edge-Server Hybrid Inference
Jihwan Bang, Juntae Lee, Kyuhong Shim, Seunghan Yang, Simyung Chang · 11 Jun 2024

MLAE: Masked LoRA Experts for Parameter-Efficient Fine-Tuning
Junjie Wang, Guangjing Yang, Wentao Chen, Huahui Yi, Xiaohu Wu, Qicheng Lao · MoE, ALM · 29 May 2024

Decomposing the Neurons: Activation Sparsity via Mixture of Experts for Continual Test Time Adaptation
Rongyu Zhang, Aosong Cheng, Yulin Luo, Gaole Dai, Huanrui Yang, ..., Ran Xu, Li Du, Yuan Du, Yanbing Jiang, Shanghang Zhang · MoE, TTA · 26 May 2024

AdaMoLE: Fine-Tuning Large Language Models with Adaptive Mixture of Low-Rank Adaptation Experts
Zefang Liu, Jiahua Luo · MoE, KELM · 01 May 2024

Weight Copy and Low-Rank Adaptation for Few-Shot Distillation of Vision Transformers
Diana-Nicoleta Grigore, Mariana-Iuliana Georgescu, J. A. Justo, T. Johansen, Andreea-Iuliana Ionescu, Radu Tudor Ionescu · 14 Apr 2024

Intuition-aware Mixture-of-Rank-1-Experts for Parameter Efficient Finetuning
Yijiang Liu, Rongyu Zhang, Huanrui Yang, Kurt Keutzer, Yuan Du, Li Du, Shanghang Zhang · MoE · 13 Apr 2024

ReFT: Representation Finetuning for Language Models
Zhengxuan Wu, Aryaman Arora, Zheng Wang, Atticus Geiger, Daniel Jurafsky, Christopher D. Manning, Christopher Potts · OffRL · 04 Apr 2024

PERL: Parameter Efficient Reinforcement Learning from Human Feedback
Hakim Sidahmed, Samrat Phatale, Alex Hutcheson, Zhuonan Lin, Zhan Chen, ..., Jessica Hoffmann, Hassan Mansoor, Wei Li, Abhinav Rastogi, Lucas Dixon · 15 Mar 2024

LoraRetriever: Input-Aware LoRA Retrieval and Composition for Mixed Tasks in the Wild
Ziyu Zhao, Leilei Gan, Guoyin Wang, Wangchunshu Zhou, Hongxia Yang, Kun Kuang, Fei Wu · MoMe · 15 Feb 2024

Model Compression and Efficient Inference for Large Language Models: A Survey
Wenxiao Wang, Wei Chen, Yicong Luo, Yongliu Long, Zhengkai Lin, Liye Zhang, Binbin Lin, Deng Cai, Xiaofei He · MQ · 15 Feb 2024

From Sparse to Soft Mixtures of Experts
J. Puigcerver, C. Riquelme, Basil Mustafa, N. Houlsby · MoE · 02 Aug 2023

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe · OSLM, ALM · 04 Mar 2022

P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang · VLM · 14 Oct 2021

Beyond Distillation: Task-level Mixture-of-Experts for Efficient Inference
Sneha Kudugunta, Yanping Huang, Ankur Bapna, M. Krikun, Dmitry Lepikhin, Minh-Thang Luong, Orhan Firat · MoE · 24 Sep 2021

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant · VPVLM · 18 Apr 2021