Recurrent Knowledge Identification and Fusion for Language Model Continual Learning

22 February 2025
Yujie Feng
Xujia Wang
Zexin Lu
Shenghong Fu
Guangyuan Shi
Yongxin Xu
Yasha Wang
Philip S. Yu
Xu Chu
Xiao-Ming Wu
Abstract

Continual learning (CL) is crucial for deploying large language models (LLMs) in dynamic real-world environments without costly retraining. While recent model ensemble and model merging methods guided by parameter importance have gained popularity, they often struggle to balance knowledge transfer and forgetting, mainly due to their reliance on static importance estimates during sequential training. In this paper, we present Recurrent-KIF, a novel CL framework for Recurrent Knowledge Identification and Fusion, which enables dynamic estimation of parameter importance distributions to enhance knowledge transfer. Inspired by human continual learning, Recurrent-KIF employs an inner loop that rapidly adapts to new tasks while identifying important parameters, coupled with an outer loop that globally manages the fusion of new and historical knowledge through redundant knowledge pruning and key knowledge merging. These inner-outer loops iteratively perform multiple rounds of fusion, allowing Recurrent-KIF to leverage intermediate training information and adaptively adjust fusion strategies based on evolving importance distributions. Extensive experiments on two CL benchmarks with various model sizes (from 770M to 13B) demonstrate that Recurrent-KIF effectively mitigates catastrophic forgetting and enhances knowledge transfer.
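To make the inner-outer loop structure described in the abstract concrete, below is a minimal PyTorch-style sketch. It is not the authors' implementation: the gradient-magnitude importance proxy, the quantile-based pruning threshold, and the function names (estimate_importance, fuse, recurrent_kif_round) are all illustrative assumptions that merely mirror the control flow the abstract describes (rapid inner-loop adaptation with importance identification, followed by outer-loop pruning and importance-weighted merging).

# Minimal sketch of the inner-outer loop control flow described in the abstract.
# All helper names and heuristics here are illustrative assumptions, not the paper's API.
import torch
import torch.nn as nn

def estimate_importance(model, batch, loss_fn):
    """Proxy importance: per-parameter gradient magnitude on the new task (assumed heuristic)."""
    model.zero_grad()
    x, y = batch
    loss_fn(model(x), y).backward()
    return {n: p.grad.detach().abs() for n, p in model.named_parameters() if p.grad is not None}

def fuse(old_params, new_params, importance, prune_quantile=0.3):
    """Outer loop: prune low-importance (redundant) updates, merge the rest into historical weights."""
    fused = {}
    for name, old_p in old_params.items():
        delta = new_params[name] - old_p                        # task-specific update
        imp = importance.get(name, torch.zeros_like(old_p))
        thresh = torch.quantile(imp.flatten(), prune_quantile)
        mask = (imp > thresh).float()                           # drop redundant updates
        weight = imp / (imp.max() + 1e-8)                       # importance-weighted merging
        fused[name] = old_p + mask * weight * delta
    return fused

def recurrent_kif_round(model, task_loader, loss_fn, lr=1e-3, inner_steps=50):
    """One inner-outer round: adapt rapidly, track importance, then fuse new and old knowledge."""
    old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    importance, step = {}, 0
    for batch in task_loader:                                   # inner loop: rapid adaptation
        opt.zero_grad()
        x, y = batch
        loss_fn(model(x), y).backward()
        opt.step()
        importance = estimate_importance(model, batch, loss_fn)
        step += 1
        if step >= inner_steps:
            break
    new_params = {n: p.detach().clone() for n, p in model.named_parameters()}
    fused = fuse(old_params, new_params, importance)            # outer loop: knowledge fusion
    with torch.no_grad():
        for n, p in model.named_parameters():
            p.copy_(fused[n])
    return model

Per the abstract, several such rounds would be run per task, so the fusion step can adapt as the importance distribution evolves during training rather than relying on a single static estimate.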

@article{feng2025_2502.17510,
  title={Recurrent Knowledge Identification and Fusion for Language Model Continual Learning},
  author={Yujie Feng and Xujia Wang and Zexin Lu and Shenghong Fu and Guangyuan Shi and Yongxin Xu and Yasha Wang and Philip S. Yu and Xu Chu and Xiao-Ming Wu},
  journal={arXiv preprint arXiv:2502.17510},
  year={2025}
}