STAR: Stability-Inducing Weight Perturbation for Continual Learning

3 March 2025
Masih Eskandar
Tooba Imtiaz
Davin Hill
Zifeng Wang
Jennifer Dy
Abstract

Humans can naturally learn new and varying tasks in a sequential manner. Continual learning is a class of learning algorithms that updates its learned model as it sees new data (on potentially new tasks) in a sequence. A key challenge in continual learning is that as the model is updated to learn new tasks, it becomes susceptible to catastrophic forgetting, where knowledge of previously learned tasks is lost. A popular approach to mitigate forgetting during continual learning is to maintain a small buffer of previously-seen samples and to replay them during training. However, this approach is limited by the small buffer size, and while forgetting is reduced, it is still present. In this paper, we propose a novel loss function, STAR, that exploits the worst-case parameter perturbation that reduces the KL-divergence of model predictions with that of its local parameter neighborhood to promote stability and alleviate forgetting. STAR can be combined with almost any existing rehearsal-based method as a plug-and-play component. We empirically show that STAR consistently improves the performance of existing methods by up to 15% across varying baselines and achieves superior or competitive accuracy to that of state-of-the-art methods aimed at improving rehearsal-based continual learning.
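The abstract outlines the core mechanism: find a worst-case perturbation of the weights within a small local neighborhood, and penalize how much the model's predictions shift (measured by KL divergence) under that perturbation, so that the learned solution stays stable as future tasks update the parameters. Below is a minimal PyTorch-style sketch of that idea; the function name star_step, the hyperparameters rho and lambda_star, and the SAM-style ascent step (using a cross-entropy gradient on buffer samples to pick the perturbation direction) are illustrative assumptions, not the paper's exact procedure.

import torch
import torch.nn.functional as F

def star_step(model, buf_x, buf_y, rho=0.05, lambda_star=1.0):
    """Illustrative sketch: accumulate gradients of a stability penalty that
    measures the KL shift in predictions under a worst-case local weight
    perturbation (hypothetical names and procedure)."""
    params = [p for p in model.parameters() if p.requires_grad]

    # Reference predictions at the current (unperturbed) weights.
    with torch.no_grad():
        ref = F.log_softmax(model(buf_x), dim=-1)

    # Pick a worst-case direction in the local parameter neighborhood via one
    # normalized ascent step on the buffer loss (a SAM-style approximation,
    # assumed here for illustration).
    loss = F.cross_entropy(model(buf_x), buf_y)
    grads = torch.autograd.grad(loss, params)
    scale = rho / (torch.norm(torch.stack([g.norm() for g in grads])) + 1e-12)
    eps = [g * scale for g in grads]

    # Move to the perturbed point, penalize the prediction shift, then restore.
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.add_(e)
    perturbed = F.log_softmax(model(buf_x), dim=-1)
    kl = lambda_star * F.kl_div(perturbed, ref, reduction="batchmean",
                                log_target=True)
    kl.backward()  # gradients land in p.grad before the weights are restored
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)
    return kl.item()

In a rehearsal-based training loop, the gradients accumulated by such a penalty would simply be added to those of the usual replay and task losses before the optimizer step, which is what would make the term plug-and-play with existing methods.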

View on arXiv
@article{eskandar2025_2503.01595,
  title={STAR: Stability-Inducing Weight Perturbation for Continual Learning},
  author={Masih Eskandar and Tooba Imtiaz and Davin Hill and Zifeng Wang and Jennifer Dy},
  journal={arXiv preprint arXiv:2503.01595},
  year={2025}
}