Birds look like cars: Adversarial analysis of intrinsically interpretable deep learning

11 March 2025
Hubert Baniecki
Przemyslaw Biecek
Abstract

A common belief is that intrinsically interpretable deep learning models ensure a correct, intuitive understanding of their behavior and offer greater robustness against accidental errors or intentional manipulation. However, these beliefs have not been comprehensively verified, and growing evidence casts doubt on them. In this paper, we highlight the risks related to overreliance and susceptibility to adversarial manipulation of these so-called "intrinsically (aka inherently) interpretable" models by design. We introduce two strategies for adversarial analysis with prototype manipulation and backdoor attacks against prototype-based networks, and discuss how concept bottleneck models defend against these attacks. Fooling the model's reasoning by exploiting its use of latent prototypes manifests the inherent uninterpretability of deep neural networks, leading to a false sense of security reinforced by a visual confirmation bias. The reported limitations of prototype-based networks put their trustworthiness and applicability into question, motivating further work on the robustness and alignment of (deep) interpretable models.
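As a rough illustration of the prototype-manipulation idea discussed in the abstract, the sketch below shows how a small, nearly imperceptible perturbation can redirect which latent prototype a prototype-based classifier "points to" as its explanation. This is not the authors' code or attack; the toy model, shapes, and hyperparameters are all hypothetical, and the PGD-style update is a standard adversarial optimization stand-in for the paper's actual methodology.

# Minimal sketch (assumed toy setup, not the paper's implementation):
# perturb an input so the highest prototype similarity shifts to an
# arbitrary target prototype while the pixel change stays small.
import torch
import torch.nn as nn

class ToyProtoNet(nn.Module):
    """Simplified prototype-based classifier: CNN backbone, latent
    prototypes, max-pooled similarity scores feeding class logits."""
    def __init__(self, n_prototypes=10, n_classes=5, d=32):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, d, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(d, d, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, d))
        self.classifier = nn.Linear(n_prototypes, n_classes, bias=False)

    def similarities(self, x):
        z = self.backbone(x)                      # (B, d, H, W)
        z = z.flatten(2).transpose(1, 2)          # (B, H*W, d)
        # squared L2 distance from every latent patch to every prototype
        dist = ((z.unsqueeze(2) - self.prototypes.unsqueeze(0).unsqueeze(0)) ** 2).sum(-1)
        # similarity = negative distance, max-pooled over spatial patches
        return (-dist).amax(dim=1)                # (B, n_prototypes)

    def forward(self, x):
        return self.classifier(self.similarities(x))

def prototype_redirection_attack(model, x, target_proto, eps=0.03, steps=40, lr=0.01):
    """PGD-style perturbation that maximizes similarity to one chosen
    prototype, so the visual 'explanation' points to an unrelated concept."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        sim = model.similarities(x + delta)[:, target_proto].sum()
        grad, = torch.autograd.grad(sim, delta)
        with torch.no_grad():
            delta += lr * grad.sign()
            delta.clamp_(-eps, eps)               # keep the change imperceptible
    return (x + delta).detach()

if __name__ == "__main__":
    model = ToyProtoNet()
    x = torch.rand(1, 3, 64, 64)
    x_adv = prototype_redirection_attack(model, x, target_proto=7)
    print("clean top prototype:", model.similarities(x).argmax().item())
    print("adv   top prototype:", model.similarities(x_adv).argmax().item())

In this toy setting, the perturbed image looks essentially unchanged, yet the prototype the model reports as most similar can move to an arbitrary target, which mirrors the visual confirmation bias the paper highlights: the explanation still looks plausible while the underlying latent reasoning has been manipulated.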

@article{baniecki2025_2503.08636,
  title={Birds look like cars: Adversarial analysis of intrinsically interpretable deep learning},
  author={Hubert Baniecki and Przemyslaw Biecek},
  journal={arXiv preprint arXiv:2503.08636},
  year={2025}
}