Language Guided Concept Bottleneck Models for Interpretable Continual Learning

30 March 2025
Lu Yu
Haoyu Han
Zhe Tao
Hantao Yao
Changsheng Xu
Abstract

Continual learning (CL) aims to enable learning systems to acquire new knowledge continually without forgetting previously learned information. CL faces the challenge of mitigating catastrophic forgetting while maintaining interpretability across tasks. Most existing CL methods focus primarily on preserving learned knowledge to improve model performance. However, as new information is introduced, the interpretability of the learning process becomes crucial for understanding the evolving decision-making process, yet it is rarely explored. In this paper, we introduce a novel framework that integrates language-guided Concept Bottleneck Models (CBMs) to address both challenges. Our approach leverages the Concept Bottleneck Layer, aligning semantic consistency with CLIP models to learn human-understandable concepts that can generalize across tasks. By focusing on interpretable concepts, our method not only enhances the model's ability to retain knowledge over time but also provides transparent decision-making insights. We demonstrate the effectiveness of our approach by achieving superior performance on several datasets, outperforming state-of-the-art methods with an improvement of up to 3.06% in final average accuracy on ImageNet-subset. Additionally, we offer concept visualizations for model predictions, further advancing the understanding of interpretable continual learning.
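The concept-bottleneck mechanism the abstract describes can be sketched in a few lines: image features are scored against a bank of text-derived concept embeddings (e.g. from CLIP), and the downstream classifier sees only those concept activations, so every class logit decomposes into per-concept contributions. The sketch below is illustrative only; all names, dimensions, and random stand-ins for CLIP embeddings are assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper).
d_feat, n_concepts, n_classes = 512, 8, 3

# Stand-ins for CLIP embeddings: one vector per image,
# one per language-defined concept (e.g. "striped", "furry", ...).
image_feat = rng.normal(size=d_feat)
concept_bank = rng.normal(size=(n_concepts, d_feat))

def l2_normalize(x, axis=-1):
    """Unit-normalize along the given axis (cosine-similarity prep)."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Concept Bottleneck Layer: cosine similarity between the image
# feature and each concept embedding gives the concept activations.
concept_scores = l2_normalize(concept_bank) @ l2_normalize(image_feat)

# The classifier operates ONLY on concept scores, so each class logit
# is a sum of per-concept contributions -- the interpretability hook.
W = rng.normal(size=(n_classes, n_concepts))
logits = W @ concept_scores
contributions = W * concept_scores  # (n_classes, n_concepts) attribution

# Each row of `contributions` sums back to that class's logit.
assert np.allclose(contributions.sum(axis=1), logits)
```

In a continual-learning setting, new tasks would extend the concept bank with new language-derived concepts while earlier concepts (and their classifier weights) are preserved; this sketch only shows the single-task bottleneck, not the paper's forgetting-mitigation machinery.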

@article{yu2025_2503.23283,
  title={Language Guided Concept Bottleneck Models for Interpretable Continual Learning},
  author={Lu Yu and Haoyu Han and Zhe Tao and Hantao Yao and Changsheng Xu},
  journal={arXiv preprint arXiv:2503.23283},
  year={2025}
}