DualCP: Rehearsal-Free Domain-Incremental Learning via Dual-Level Concept Prototype

23 March 2025
Qiang Wang
Yuhang He
Songlin Dong
Xiang Song
Jizhou Han
Haoyu Luo
Yihong Gong
Abstract

Domain-Incremental Learning (DIL) enables vision models to adapt to changing conditions in real-world environments while maintaining the knowledge acquired from previous domains. Given privacy concerns and training-time costs, Rehearsal-Free DIL (RFDIL) is the more practical setting. Inspired by the incremental cognitive process of the human brain, we design Dual-level Concept Prototypes (DualCP) for each class to address the conflict between learning new knowledge and retaining old knowledge in RFDIL. To construct DualCP, we propose a Concept Prototype Generator (CPG) that generates both coarse-grained and fine-grained prototypes for each class. Additionally, we introduce a Coarse-to-Fine calibrator (C2F) to align image features with DualCP. Finally, we propose a Dual Dot-Regression (DDR) loss function to optimize our C2F module. Extensive experiments on the DomainNet, CDDB, and CORe50 datasets demonstrate the effectiveness of our method.
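The abstract describes regressing calibrated image features toward both a coarse-grained and a fine-grained prototype per class. The paper's exact DDR formulation is not given here, so the following PyTorch fragment is only a minimal sketch of what a dual dot-regression style objective could look like; the function name, normalization, nearest-fine-prototype choice, and the `alpha` weighting are all assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a Dual Dot-Regression (DDR) style loss.
# All design details below are assumptions for illustration only.
import torch
import torch.nn.functional as F


def ddr_loss(features, coarse_protos, fine_protos, labels, alpha=0.5):
    """Regress calibrated image features toward both prototype levels.

    features:      (B, D) image features after the C2F calibrator
    coarse_protos: (C, D) one coarse-grained prototype per class
    fine_protos:   (C, K, D) K fine-grained prototypes per class
    labels:        (B,) ground-truth class indices
    alpha:         assumed weight balancing the two levels
    """
    f = F.normalize(features, dim=-1)

    # Coarse level: push the dot product with the true-class coarse
    # prototype toward 1.
    pc = F.normalize(coarse_protos[labels], dim=-1)        # (B, D)
    coarse_term = (1.0 - (f * pc).sum(dim=-1)).mean()

    # Fine level: regress toward the most similar fine-grained prototype
    # of the true class (a plausible choice, not necessarily the paper's).
    pf = F.normalize(fine_protos[labels], dim=-1)          # (B, K, D)
    sims = torch.einsum('bd,bkd->bk', f, pf)               # (B, K)
    fine_term = (1.0 - sims.max(dim=-1).values).mean()

    return alpha * coarse_term + (1.0 - alpha) * fine_term
```

Under these assumptions, the loss is minimized when each calibrated feature points in the same direction as its class's coarse prototype and at least one of its fine-grained prototypes, which mirrors the coarse-to-fine alignment the abstract describes.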

View on arXiv
@article{wang2025_2503.18042,
  title={DualCP: Rehearsal-Free Domain-Incremental Learning via Dual-Level Concept Prototype},
  author={Qiang Wang and Yuhang He and Songlin Dong and Xiang Song and Jizhou Han and Haoyu Luo and Yihong Gong},
  journal={arXiv preprint arXiv:2503.18042},
  year={2025}
}