PILOT: A Pre-Trained Model-Based Continual Learning Toolbox

13 September 2023
Hai-Long Sun
Da-Wei Zhou
Han-Jia Ye
De-Chuan Zhan
    CLL
Abstract

While traditional machine learning can effectively tackle a wide range of problems, it primarily operates within a closed-world setting, which presents limitations when dealing with streaming data. As a solution, incremental learning has emerged to address real-world scenarios involving the arrival of new data. Recently, pre-training has made significant advancements and garnered the attention of numerous researchers. The strong performance of these pre-trained models (PTMs) presents a promising avenue for developing continual learning algorithms that can effectively adapt to real-world scenarios. Consequently, exploring the utilization of PTMs in incremental learning has become essential. This paper introduces a pre-trained model-based continual learning toolbox known as PILOT. On the one hand, PILOT implements several state-of-the-art class-incremental learning algorithms based on pre-trained models, such as L2P, DualPrompt, and CODA-Prompt. On the other hand, PILOT also adapts typical class-incremental learning algorithms (e.g., DER, FOSTER, and MEMO) to the context of pre-trained models to evaluate their effectiveness.
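To make the class-incremental setting concrete, the following is a minimal, self-contained sketch: classes arrive in tasks, a frozen "pre-trained" feature extractor is never updated, and a nearest-class-mean classifier accumulates one prototype per class (similar in spirit to prototype-based PTM baselines). The random-projection backbone, the task split, and all names here are illustrative assumptions, not PILOT's actual API.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(x):
    # Stand-in for a frozen pre-trained backbone: a fixed random projection.
    # (In PILOT this would be, e.g., a pre-trained ViT; this is a placeholder.)
    W = np.random.default_rng(42).normal(size=(x.shape[1], 16))
    return x @ W

# Synthetic data stream: 6 classes arriving as 3 incremental tasks of 2 classes each.
num_classes, dim, per_class = 6, 8, 50
means = rng.normal(scale=4.0, size=(num_classes, dim))
X = np.concatenate([m + rng.normal(size=(per_class, dim)) for m in means])
y = np.repeat(np.arange(num_classes), per_class)

prototypes = {}  # class id -> mean feature vector (the only state carried across tasks)
accuracies = []
for classes in [(0, 1), (2, 3), (4, 5)]:
    # Learn prototypes from the current task's data only (no replay buffer).
    for c in classes:
        prototypes[c] = extract_features(X[y == c]).mean(axis=0)
    # Evaluate on all classes seen so far, as class-incremental benchmarks do.
    seen = sorted(prototypes)
    mask = np.isin(y, seen)
    feats = extract_features(X[mask])
    proto = np.stack([prototypes[c] for c in seen])
    dists = ((feats[:, None, :] - proto[None]) ** 2).sum(-1)  # (n_samples, n_seen)
    pred = np.array(seen)[np.argmin(dists, axis=1)]
    accuracies.append((pred == y[mask]).mean())

print([round(a, 2) for a in accuracies])  # per-task accuracy on all seen classes
```

Because the backbone is frozen and only class means are stored, there is no catastrophic forgetting here by construction; methods like L2P or DER add learnable prompts or expanded modules on top of this basic setup.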

@article{sun2025_2309.07117,
  title={PILOT: A Pre-Trained Model-Based Continual Learning Toolbox},
  author={Hai-Long Sun and Da-Wei Zhou and De-Chuan Zhan and Han-Jia Ye},
  journal={arXiv preprint arXiv:2309.07117},
  year={2025}
}