ResearchTrend.AI
BECAME: BayEsian Continual Learning with Adaptive Model MErging

3 April 2025
Mei Li, Yuxiang Lu, Qinyan Dai, Suizhi Huang, Yue Ding, Hongtao Lu
Abstract

Continual Learning (CL) strives to learn incrementally across tasks while mitigating catastrophic forgetting. A key challenge in CL is balancing stability (retaining prior knowledge) and plasticity (learning new tasks). While representative gradient projection methods ensure stability, they often limit plasticity. Model merging techniques offer promising solutions, but prior methods typically rely on empirical assumptions and carefully selected hyperparameters. In this paper, we explore the potential of model merging to enhance the stability-plasticity trade-off, providing theoretical insights that underscore its benefits. Specifically, we reformulate the merging mechanism using Bayesian continual learning principles and derive a closed-form solution for the optimal merging coefficient that adapts to the diverse characteristics of tasks. To validate our approach, we introduce a two-stage framework named BECAME, which synergizes the expertise of gradient projection and adaptive merging. Extensive experiments show that our approach outperforms state-of-the-art CL methods and existing merging strategies.
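The merging mechanism described above can be illustrated with a generic precision-weighted parameter average, where each coefficient is proportional to the (diagonal) Fisher information of the corresponding model, i.e. the precision of a Gaussian posterior approximation. This is a minimal sketch of the general Bayesian-merging idea, not the paper's exact closed-form solution; the function name and the epsilon stabilizer are illustrative assumptions.

```python
import numpy as np

def precision_weighted_merge(theta_old, theta_new, fisher_old, fisher_new):
    """Merge two parameter vectors with a per-parameter coefficient
    proportional to each model's diagonal Fisher information
    (a standard Bayesian heuristic, not BECAME's exact formula)."""
    # Coefficient alpha favors the model that is more certain
    # (higher precision) about each parameter.
    alpha = fisher_old / (fisher_old + fisher_new + 1e-12)
    return alpha * theta_old + (1.0 - alpha) * theta_new

# Toy example: each model is confident about a different parameter.
theta_old = np.array([1.0, 0.0])   # parameters after previous tasks
theta_new = np.array([0.0, 1.0])   # parameters after the current task
fisher_old = np.array([4.0, 1.0])  # old model is certain about dim 0
fisher_new = np.array([1.0, 4.0])  # new model is certain about dim 1
merged = precision_weighted_merge(theta_old, theta_new, fisher_old, fisher_new)
```

Each merged coordinate stays close to the value of whichever model is more confident there, which is the stability-plasticity balance the abstract refers to: retain prior knowledge where the old model is certain, adopt new knowledge where the new model is.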

@article{li2025_2504.02666,
  title={BECAME: BayEsian Continual Learning with Adaptive Model MErging},
  author={Mei Li and Yuxiang Lu and Qinyan Dai and Suizhi Huang and Yue Ding and Hongtao Lu},
  journal={arXiv preprint arXiv:2504.02666},
  year={2025}
}