arXiv: 2409.16167 (v3, latest)

Merging LoRAs like Playing LEGO: Pushing the Modularity of LoRA to Extremes Through Rank-Wise Clustering

International Conference on Learning Representations (ICLR), 2024
24 September 2024
Ziyu Zhao, Tao Shen, Didi Zhu, Zexi Li, Jing Su, Xuwu Wang, Kun Kuang, Fei Wu
Topics: MoMe
Links: arXiv (abs) · PDF · HTML

Papers citing "Merging LoRAs like Playing LEGO: Pushing the Modularity of LoRA to Extremes Through Rank-Wise Clustering"

24 of 24 citing papers shown.

  • WeaveRec: An LLM-Based Cross-Domain Sequential Recommendation Framework with Model Merging
    Min Hou, Xin Liu, Le Wu, Chenyi He, Hao Liu, Z. Li, Xin Li, Si Wei · 30 Oct 2025 · MoMe
  • FreeFuse: Multi-Subject LoRA Fusion via Auto Masking at Test Time
    Yaoli Liu, Yao-Xiang Ding, Kun Zhou · 27 Oct 2025
  • Towards Reversible Model Merging For Low-rank Weights
    Mohammadsajad Alipour, Mohammad Mohammadi Amiri · 15 Oct 2025 · MoMe
  • K-Merge: Online Continual Merging of Adapters for On-device Large Language Models
    Donald Shenaj, Ondrej Bohdal, Taha Ceritli, Mete Ozay, Pietro Zanuttigh, Umberto Michieli · 15 Oct 2025 · MoMe, CLL
  • HiLoRA: Adaptive Hierarchical LoRA Routing for Training-Free Domain Generalization
    Ziyi Han, Huanyu Wang, Zeyu Zhang, Xiangxiang Dai, Xutong Liu, J. C. Lui · 14 Oct 2025
  • Black-box Model Merging for Language-Model-as-a-Service with Massive Model Repositories
    Shilian Chen, Jie Zhou, Tianyu Huai, Y. Lu, Junsong Li, ..., Y. Yang, Xin Li, Qin Chen, Hang Yan, Liang He · 16 Sep 2025 · MoMe
  • Dropping Experts, Recombining Neurons: Retraining-Free Pruning for Sparse Mixture-of-Experts LLMs
    Yixiao Zhou, Ziyu Zhao, Dongzhou Cheng, Zhiliang Wu, Jie Gui, Yi-feng Yang, Fei Wu, Yu Cheng, Hehe Fan · 12 Sep 2025 · MoMe, MoE
  • Semantic-guided LoRA Parameters Generation
    Miaoge Li, Yang Chen, Zhijie Rao, Can Jiang, Jingcai Guo · 05 Sep 2025 · OffRL
  • Efficient Multi-Source Knowledge Transfer by Model Merging
    Marcin Osial, Bartosz Wójcik, Bartosz Zieliński, Sebastian Cygert · 26 Aug 2025 · MoMe
  • Efficient Modular Learning through Naive LoRA Summation: Leveraging Orthogonality in High-Dimensional Models
    Zhanhao Cao, Clement Truong, Andrew Lizarraga · 16 Aug 2025 · MoMe
  • TiMoE: Time-Aware Mixture of Language Experts
    Robin Faro, Dongyang Fan, Tamar Alphaidze, Martin Jaggi · 12 Aug 2025 · MoE
  • Position: Pause Recycling LoRAs and Prioritize Mechanisms to Uncover Limits and Effectiveness
    Mei-Yen Chen, Thi Thu Uyen Hoang, Michael Hahn, M. Sarfraz · 16 Jun 2025 · MoMe
  • Come Together, But Not Right Now: A Progressive Strategy to Boost Low-Rank Adaptation
    Zhan Zhuang, Xiequn Wang, Wei Li, Yulong Zhang, Qiushi Huang, ..., Yanbin Wei, Yuhe Nie, Kede Ma, Yu Zhang, Ying Wei · 06 Jun 2025
  • FedRPCA: Enhancing Federated LoRA Aggregation Using Robust PCA
    Divyansh Jhunjhunwala, Arian Raje, Madan Ravi Ganesh, Chaithanya Kumar Mummadi, Chaoqun Dong, Jiawei Zhou, Wan-Yi Lin, Gauri Joshi, Zhenzhen Li · 01 Jun 2025
  • Navigating the Accuracy-Size Trade-Off with Flexible Model Merging
    Akash Dhasade, Divyansh Jhunjhunwala, Milos Vujasinovic, Gauri Joshi, Anne-Marie Kermarrec · 29 May 2025 · MoMe
  • Unraveling LoRA Interference: Orthogonal Subspaces for Robust Model Merging (ACL 2025)
    Haobo Zhang, Jiayu Zhou · 28 May 2025 · MoMe
  • Permissioned LLMs: Enforcing Access Control in Large Language Models
    Bargav Jayaraman, Virendra J. Marathe, Hamid Mozaffari, William F. Shen, Krishnaram Kenthapadi · 28 May 2025
  • One Rank at a Time: Cascading Error Dynamics in Sequential Learning
    Mahtab Alizadeh Vandchali, Fangshuo Liao, Anastasios Kyrillidis · 28 May 2025
  • MultLFG: Training-free Multi-LoRA composition using Frequency-domain Guidance
    Aniket Roy, Maitreya Suin, Ketul Shah, Rama Chellappa · 26 May 2025
  • Sci-LoRA: Mixture of Scientific LoRAs for Cross-Domain Lay Paraphrasing (ACL 2025)
    Ming Cheng, Jiaying Gong, Hoda Eldardiry · 24 May 2025 · AI4CE
  • MINGLE: Mixture of Null-Space Gated Low-Rank Experts for Test-Time Continual Model Merging
    Zihuan Qiu, Yi Xu, Chiyuan He, Fanman Meng, Linfeng Xu, Qi Wu, Hongliang Li · 17 May 2025 · CLL, MoMe
  • Beyond Profile: From Surface-Level Facts to Deep Persona Simulation in LLMs (ACL 2025)
    Zixiao Wang, Duzhen Zhang, Ishita Agrawal, Shen Gao, Le Song, Xiuying Chen · 18 Feb 2025
  • Each Rank Could be an Expert: Single-Ranked Mixture of Experts LoRA for Multi-Task Learning
    Ziyu Zhao, Yixiao Zhou, Didi Zhu, Zhenyuan Zhang, Xinze Wang, ..., Leilei Gan, Minqin Zhu, Zhongyu Wei, Fei Wu, Yu Cheng · 25 Jan 2025 · MoE, MoMe
  • Mitigating the Backdoor Effect for Multi-Task Model Merging via Safety-Aware Subspace (ICLR 2024)
    Jinluan Yang, Anke Tang, Didi Zhu, Ruihao Zhang, Li Shen, Leilei Gan · 17 Oct 2024 · MoMe, AAML