Explanation Bottleneck Models

26 September 2024
Shin'ya Yamaguchi
Kosuke Nishida
Abstract

Recent concept-based interpretable models have succeeded in providing meaningful explanations based on pre-defined concept sets. However, the dependency on pre-defined concepts restricts their applicability, because only a limited number of concepts is available for explanations. This paper proposes a novel interpretable deep neural network called the explanation bottleneck model (XBM). XBMs generate a text explanation from the input without pre-defined concepts and then make the final task prediction based on the generated explanation, leveraging pre-trained vision-language encoder-decoder models. To achieve both target task performance and explanation quality, we train XBMs with the target task loss plus a regularization term that penalizes the explanation decoder via distillation from the frozen pre-trained decoder. Our experiments, including a comparison to state-of-the-art concept bottleneck models, confirm that XBMs provide accurate and fluent natural language explanations without pre-defined concept sets. Code is available at this https URL.

@article{yamaguchi2025_2409.17663,
  title={Explanation Bottleneck Models},
  author={Shin'ya Yamaguchi and Kosuke Nishida},
  journal={arXiv preprint arXiv:2409.17663},
  year={2025}
}