Parameter-Efficient Continual Fine-Tuning: A Survey

18 April 2025
Eric Nuertey Coleman
Luigi Quarantiello
Ziyue Liu
Qinwen Yang
Samrat Mukherjee
Julio Hurtado
Vincenzo Lomonaco
Abstract

The emergence of large pre-trained networks has revolutionized the AI field, unlocking new possibilities and achieving unprecedented performance. However, these models inherit a fundamental limitation from traditional Machine Learning approaches: their strong dependence on the i.i.d. assumption hinders their adaptability to dynamic learning scenarios. We believe the next breakthrough in AI lies in enabling efficient adaptation to evolving environments -- such as the real world -- where new data and tasks arrive sequentially. This challenge defines the field of Continual Learning (CL), a Machine Learning paradigm focused on developing lifelong learning neural models. One alternative for efficiently adapting these large-scale models is known as Parameter-Efficient Fine-Tuning (PEFT). These methods tackle the issue of adapting the model to particular data or scenarios by performing small and efficient modifications, achieving performance similar to full fine-tuning. However, these techniques still lack the ability to adjust the model to multiple tasks continually, as they suffer from Catastrophic Forgetting. In this survey, we first provide an overview of CL algorithms and PEFT methods before reviewing the state-of-the-art on Parameter-Efficient Continual Fine-Tuning (PECFT). We examine various approaches, discuss evaluation metrics, and explore potential future research directions. Our goal is to highlight the synergy between CL and Parameter-Efficient Fine-Tuning, guide researchers in this field, and pave the way for novel future research directions.
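To make the PEFT idea mentioned in the abstract concrete, below is a minimal sketch (not taken from the survey) of one widely used technique, low-rank adaptation (LoRA): the pre-trained weight is frozen and only a small low-rank correction is trained, which is why the number of trainable parameters stays a tiny fraction of the full layer. The class name, rank, and scaling values are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update (LoRA-style sketch)."""

    def __init__(self, in_features, out_features, rank=8, alpha=16.0):
        super().__init__()
        # Pre-trained weight, frozen during fine-tuning
        # (randomly initialized here only for illustration).
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)
        self.base.bias.requires_grad_(False)
        # Trainable low-rank factors: A maps down to `rank`, B maps back up.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        # Output = frozen base projection + scaled low-rank correction.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

layer = LoRALinear(768, 768, rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} / {total}")  # only the low-rank factors train
```

As the survey notes, such adapters handle a single adaptation well but do not by themselves prevent catastrophic forgetting when tasks arrive sequentially, which is the gap PECFT methods aim to close.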

@article{coleman2025_2504.13822,
  title={Parameter-Efficient Continual Fine-Tuning: A Survey},
  author={Eric Nuertey Coleman and Luigi Quarantiello and Ziyue Liu and Qinwen Yang and Samrat Mukherjee and Julio Hurtado and Vincenzo Lomonaco},
  journal={arXiv preprint arXiv:2504.13822},
  year={2025}
}