LoRE-Merging: Exploring Low-Rank Estimation For Large Language Model Merging

15 February 2025
Zehua Liu
Han Wu
Yuxuan Yao
Ruifeng She
Xiongwei Han
Tao Zhong
Mingxuan Yuan
Abstract

While most current approaches rely on further training techniques, such as fine-tuning or reinforcement learning, to enhance model capacities, model merging stands out for its ability to improve models without any additional training. In this paper, we propose LoRE-Merging, a unified framework for model merging based on low-rank estimation of task vectors that does not require access to the base model. Our approach is motivated by the observation that task vectors from fine-tuned models frequently exhibit a limited number of dominant singular values, making low-rank estimates less prone to interference. We implement the method by formulating merging as an optimization problem. Extensive empirical experiments demonstrate the effectiveness of our framework in mitigating interference and preserving task-specific information, thereby advancing the state of the art in model merging techniques.
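The optimization view lends itself to a simple alternating scheme. Below is a minimal sketch, not the authors' implementation: it assumes each fine-tuned weight matrix W_i is modeled as a surrogate base W0 plus a rank-constrained task vector T_i, alternates between a truncated-SVD step for the task vectors and an averaging step for the base, and merges by summing the estimated task vectors onto the surrogate base. All function names, the rank r, and the choice of coordinate descent are illustrative assumptions.

import numpy as np

def low_rank(M, r):
    """Best rank-r approximation of M via truncated SVD (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def estimate_base_and_task_vectors(weights, r, n_iters=50):
    """Alternately minimize sum_i ||W_i - (W0 + T_i)||_F^2 s.t. rank(T_i) <= r.

    `weights` is a list of fine-tuned weight matrices; no true base model
    is needed, matching the paper's stated setting.
    """
    W0 = np.mean(weights, axis=0)  # initial surrogate base estimate
    for _ in range(n_iters):
        # With W0 fixed, the optimal T_i is the rank-r truncation of W_i - W0.
        Ts = [low_rank(W - W0, r) for W in weights]
        # With the T_i fixed, the optimal W0 is the mean of the residuals.
        W0 = np.mean([W - T for W, T in zip(weights, Ts)], axis=0)
    return W0, Ts

# Toy usage: three "fine-tuned" matrices, merged via summed task vectors.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((64, 64)) for _ in range(3)]
W0, Ts = estimate_base_and_task_vectors(weights, r=8)
merged = W0 + sum(Ts)

Because the dominant singular values capture most of the task-specific update, truncating each T_i to low rank discards the small, noisy directions along which task vectors are most likely to interfere with one another.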

@article{liu2025_2502.10749,
  title={LoRE-Merging: Exploring Low-Rank Estimation For Large Language Model Merging},
  author={Zehua Liu and Han Wu and Yuxuan Yao and Ruifeng She and Xiongwei Han and Tao Zhong and Mingxuan Yuan},
  journal={arXiv preprint arXiv:2502.10749},
  year={2025}
}