Towards Robust Incremental Learning under Ambiguous Supervision

23 January 2025
Rui Wang
Mingxuan Xia
Chang Yao
Lei Feng
Junbo Zhao
Gang Chen
Haobo Wang
Abstract

Traditional Incremental Learning (IL) aims to handle sequential, fully-supervised learning problems in which novel classes emerge from time to time. However, due to inherent annotation uncertainty and ambiguity, collecting high-quality annotated data in a dynamic learning system can be extremely expensive. To mitigate this problem, we propose a novel weakly-supervised learning paradigm called Incremental Partial Label Learning (IPLL), where the sequentially arriving data are associated with a set of candidate labels rather than the ground truth. Technically, we develop the Prototype-Guided Disambiguation and Replay Algorithm (PGDR), which leverages class prototypes as a proxy to mitigate two intertwined challenges in IPLL, i.e., label ambiguity and catastrophic forgetting. To handle the former, PGDR couples a momentum-based pseudo-labeling algorithm with prototype-guided initialization, yielding a balanced perception of classes. To alleviate forgetting, we develop a memory replay technique that collects well-disambiguated samples while maintaining representativeness and diversity. By jointly distilling knowledge from the curated memory data, our framework exhibits strong disambiguation ability on samples from new tasks and suffers less forgetting of previously learned knowledge. Extensive experiments demonstrate that PGDR achieves superior performance.
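To make the described pipeline concrete, below is a minimal PyTorch-style sketch of the two ideas highlighted in the abstract: prototype-guided initialization of pseudo-labels restricted to each candidate set, a momentum-based refinement of those pseudo-labels, and a simplified class-balanced replay selection. All function names, tensor shapes, and the specific update rules are illustrative assumptions, not the authors' exact implementation.

import torch
import torch.nn.functional as F

def init_pseudo_labels(features, prototypes, candidate_mask, temperature=0.1):
    # Prototype-guided initialization (illustrative): score each sample against the
    # class prototypes, mask out non-candidate classes, and normalise over candidates.
    # features: (N, D), prototypes: (C, D), candidate_mask: (N, C) with 1 for candidates.
    features = F.normalize(features, dim=1)
    prototypes = F.normalize(prototypes, dim=1)
    sims = features @ prototypes.t() / temperature            # (N, C) cosine similarities
    sims = sims.masked_fill(candidate_mask == 0, float('-inf'))
    return F.softmax(sims, dim=1)                              # mass only on candidate labels

def momentum_refine(pseudo_labels, probs, candidate_mask, momentum=0.9):
    # Momentum-based pseudo-label update (assumed rule): blend the running pseudo-label
    # with the model's current prediction, then renormalise over the candidate set so
    # no probability mass leaks to classes outside it.
    target = probs * candidate_mask
    target = target / target.sum(dim=1, keepdim=True).clamp_min(1e-12)
    updated = momentum * pseudo_labels + (1.0 - momentum) * target
    return updated / updated.sum(dim=1, keepdim=True).clamp_min(1e-12)

def select_replay_memory(pseudo_labels, per_class=20):
    # Replay-buffer selection (simplified stand-in): keep the most confidently
    # disambiguated samples of every class so the buffer stays class-balanced.
    confidences, assigned = pseudo_labels.max(dim=1)
    keep = []
    for c in assigned.unique():
        idx = (assigned == c).nonzero(as_tuple=True)[0]
        order = confidences[idx].argsort(descending=True)
        keep.append(idx[order[:per_class]])
    return torch.cat(keep)

In this sketch, replay candidates are ranked purely by pseudo-label confidence per class; the paper additionally maintains representativeness and diversity of the stored samples, which is not reproduced here.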

View on arXiv
@article{wang2025_2501.13584,
  title={Towards Robust Incremental Learning under Ambiguous Supervision},
  author={Rui Wang and Mingxuan Xia and Chang Yao and Lei Feng and Junbo Zhao and Gang Chen and Haobo Wang},
  journal={arXiv preprint arXiv:2501.13584},
  year={2025}
}