A Stronger Mixture of Low-Rank Experts for Fine-Tuning Foundation Models

20 February 2025
Mengyang Sun
Yihao Wang
Tao Feng
Dan Zhang
Yifan Zhu
Jie Tang
MoE
Abstract

To streamline the fine-tuning of foundation models, Low-Rank Adapters (LoRAs) have been widely adopted across various fields, including instruction tuning and domain adaptation. The underlying idea of LoRA is to decompose a full-rank update matrix into the product of two lower-rank matrices, which reduces storage consumption and accelerates training. Furthermore, to address the limited expressive capacity of LoRA, the Mixture-of-Experts (MoE) framework has been introduced to combine multiple LoRA adapters. Integrating LoRA experts yields visible improvements across several downstream scenarios. However, the mixture of LoRAs (MoE-LoRA) still exhibits low robustness during tuning and inference. Inspired by Riemannian preconditioners, which train LoRA as a sub-space projector, we propose a new training strategy for MoE-LoRA that stabilizes and boosts its feature learning procedure through multi-space projections. Experiments with the SGD and AdamW optimizers demonstrate the effectiveness of our methodology. Source code is available at this https URL.
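For context, the sketch below illustrates the MoE-LoRA setup the abstract describes: a frozen pretrained linear layer augmented with several low-rank (A, B) expert pairs whose outputs are mixed by a router. This is not the authors' implementation; the class name, the dense softmax router, and the hyperparameters are hypothetical, and the paper's Riemannian-preconditioned, multi-space projection training strategy is not reproduced here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELoRALinear(nn.Module):
    """Minimal mixture-of-LoRA-experts layer (illustrative sketch only)."""

    def __init__(self, in_features, out_features, num_experts=4, rank=8, alpha=16.0):
        super().__init__()
        # Frozen pretrained weight W (full rank); only the adapters are trained.
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)
        # Each expert i learns a low-rank update B_i @ A_i with rank << min(d_in, d_out).
        self.A = nn.Parameter(torch.randn(num_experts, rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(num_experts, out_features, rank))
        # Simple dense softmax router over experts (an assumed design choice).
        self.router = nn.Linear(in_features, num_experts, bias=False)
        self.scaling = alpha / rank

    def forward(self, x):  # x: (batch, in_features)
        gates = F.softmax(self.router(x), dim=-1)          # (batch, E)
        low = torch.einsum("bd,erd->ber", x, self.A)       # (batch, E, rank)
        up = torch.einsum("ber,eor->beo", low, self.B)     # (batch, E, out)
        delta = torch.einsum("be,beo->bo", gates, up)      # gated mixture of expert updates
        return self.base(x) + self.scaling * delta

# Usage: gradients flow only to A, B, and the router; the base weight stays frozen.
layer = MoELoRALinear(768, 768)
y = layer(torch.randn(2, 768))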

@article{sun2025_2502.15828,
  title={A Stronger Mixture of Low-Rank Experts for Fine-Tuning Foundation Models},
  author={Mengyang Sun and Yihao Wang and Tao Feng and Dan Zhang and Yifan Zhu and Jie Tang},
  journal={arXiv preprint arXiv:2502.15828},
  year={2025}
}