Extrapolation Merging: Keep Improving With Extrapolation and Merging

5 March 2025
Yiguan Lin
Bin Xu
Yinghao Li
Yang Gao
Abstract

Large Language Models (LLMs) require instruction fine-tuning to perform different downstream tasks. However, instruction fine-tuning still demands significant computational resources and labeled data, and there is no established paradigm for improving model performance without additional compute and data. Model merging aims to enhance performance by combining the parameters of different models, but because the merging process lacks a clear optimization direction, it does not always guarantee improved performance. In this paper, we aim to provide such a direction for model merging. We first validate the effectiveness of model extrapolation during the instruction fine-tuning phase. We then propose Extrapolation Merging, a paradigm that continues to improve model performance without requiring extra computational resources or data. Using extrapolation, we give model merging a clear direction, perform a local optimization search, and thereby enhance the merged model's performance. Experiments on seven different tasks show that our method consistently improves the model's performance after fine-tuning.
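The abstract does not spell out the exact update rules, but a minimal sketch of the parameter arithmetic typically involved in "extrapolate, then merge" might look as follows. The extrapolation formula theta_ext = theta_sft + alpha * (theta_sft - theta_base), the value of alpha, and the uniform merge weights are assumptions for illustration only, not the paper's procedure.

    # Illustrative sketch of extrapolation followed by parameter merging.
    # The formulas and hyperparameters below are assumptions, not the paper's method.
    import torch

    def extrapolate(base: dict, sft: dict, alpha: float) -> dict:
        # Move past the fine-tuned checkpoint along the fine-tuning direction:
        # theta_ext = theta_sft + alpha * (theta_sft - theta_base)
        return {k: sft[k] + alpha * (sft[k] - base[k]) for k in sft}

    def merge(models: list, weights: list) -> dict:
        # Weighted average of several checkpoints' parameters.
        keys = models[0].keys()
        return {k: sum(w * m[k] for w, m in zip(weights, models)) for k in keys}

    # Toy state dicts standing in for real model checkpoints.
    torch.manual_seed(0)
    base_sd = {"layer.weight": torch.randn(4, 4)}
    sft_sd = {"layer.weight": base_sd["layer.weight"] + 0.1 * torch.randn(4, 4)}

    extrapolated_sd = extrapolate(base_sd, sft_sd, alpha=0.5)        # assumed alpha
    merged_sd = merge([sft_sd, extrapolated_sd], weights=[0.5, 0.5])  # assumed weights

In this reading, extrapolation supplies the search direction and merging interpolates along it, which matches the abstract's description of a local optimization search over merged parameters.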

@article{lin2025_2503.04834,
  title={Extrapolation Merging: Keep Improving With Extrapolation and Merging},
  author={Yiguan Lin and Bin Xu and Yinghao Li and Yang Gao},
  journal={arXiv preprint arXiv:2503.04834},
  year={2025}
}