ResearchTrend.AI



Reinforced Model Merging

27 March 2025
Jiaqi Han
Jingwen Ye
Shunyu Liu
Haofei Zhang
Jie Song
Zunlei Feng
Mingli Song
Abstract

The success of large language models has drawn widespread attention to model merging techniques, especially training-free methods that combine model capabilities within the parameter space. However, two challenges remain: (1) uniform treatment of all parameters leads to performance degradation; (2) search-based algorithms are often inefficient. In this paper, we present an innovative framework termed Reinforced Model Merging (RMM), which encompasses an environment and agent tailored for merging tasks. These components interact to execute layer-wise merging actions, aiming to find the optimal merging architecture. Notably, RMM operates without any gradient computation on the original models, making it feasible for edge devices. Furthermore, by utilizing data subsets during the evaluation process, we address the bottleneck in the reward feedback phase, accelerating RMM by up to 100 times. Extensive experiments demonstrate that RMM achieves state-of-the-art performance across various vision and NLP datasets and effectively overcomes the limitations of existing baseline methods. Our code is available at this https URL.
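The abstract's core idea — a gradient-free, layer-wise search over merging actions, scored by rewards computed on small data subsets — can be illustrated with a minimal sketch. This is not the paper's implementation: the `merge` and `search` functions, the `"avg"`/copy action set, and the hill-climbing policy below are simplified stand-ins (the paper uses an RL agent), and models are represented as plain nested lists of weights.

```python
import random

def merge(models, plan):
    """Build a merged model by applying one action per layer.

    plan[i] is either the index of a source model whose layer i is copied,
    or "avg" to average layer i across all models. Both actions are
    illustrative stand-ins for the paper's merging operators.
    """
    merged = []
    for i, action in enumerate(plan):
        layers = [m[i] for m in models]  # layer i from every source model
        if action == "avg":
            merged.append([sum(ws) / len(ws) for ws in zip(*layers)])
        else:
            merged.append(list(layers[action]))
    return merged

def search(models, reward_fn, steps=200, seed=0):
    """Gradient-free search over layer-wise merging plans.

    reward_fn scores a candidate merged model, e.g. accuracy on a small
    data subset, mirroring the subset-based reward feedback described in
    the abstract. No gradients of the source models are ever taken.
    """
    rng = random.Random(seed)
    n_layers = len(models[0])
    actions = list(range(len(models))) + ["avg"]
    best_plan = [rng.choice(actions) for _ in range(n_layers)]
    best_reward = reward_fn(merge(models, best_plan))
    for _ in range(steps):
        plan = list(best_plan)
        plan[rng.randrange(n_layers)] = rng.choice(actions)  # mutate one layer
        r = reward_fn(merge(models, plan))
        if r > best_reward:  # keep strictly better plans only
            best_plan, best_reward = plan, r
    return best_plan, best_reward
```

For example, with two toy three-layer "models" and a reward defined as negative L1 distance to a target model, the search quickly recovers the per-layer choice that reconstructs the target.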

@article{han2025_2503.21272,
  title={Reinforced Model Merging},
  author={Jiaqi Han and Jingwen Ye and Shunyu Liu and Haofei Zhang and Jie Song and Zunlei Feng and Mingli Song},
  journal={arXiv preprint arXiv:2503.21272},
  year={2025}
}