ResearchTrend.AI


Beyond Patches: Mining Interpretable Part-Prototypes for Explainable AI

16 April 2025
Mahdi Alehdaghi
Rajarshi Bhattacharya
Pourya Shamsolmoali
Rafael M. O. Cruz
Maguelonne Heritier
Eric Granger
Abstract

Deep learning has provided considerable advancements for multimedia systems, yet the interpretability of deep models remains a challenge. State-of-the-art post-hoc explainability methods, such as GradCAM, provide visual interpretation based on heatmaps but lack conceptual clarity. Prototype-based approaches, like ProtoPNet and PIPNet, offer a more structured explanation but rely on fixed patches, limiting their robustness and semantic consistency. To address these limitations, a part-prototypical concept mining network (PCMNet) is proposed that dynamically learns interpretable prototypes from meaningful regions. PCMNet clusters prototypes into concept groups, creating semantically grounded explanations without requiring additional annotations. Through a joint process of unsupervised part discovery and concept activation vector extraction, PCMNet effectively captures discriminative concepts and makes interpretable classification decisions. Our extensive experiments comparing PCMNet against state-of-the-art methods on multiple datasets show that it can provide a high level of interpretability, stability, and robustness under clean and occluded scenarios.
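The abstract describes a two-level grouping: part-level features are mined into prototypes, and prototypes are then clustered into concept groups. A minimal sketch of that idea, using plain k-means — all function names, shapes, and hyperparameters here are illustrative assumptions, not taken from the paper's actual implementation:

```python
import numpy as np

def kmeans(x, k, iters=50, seed=0):
    """Plain k-means; returns (centroids, label per input vector)."""
    rng = np.random.default_rng(seed)
    centroids = x[rng.choice(len(x), size=k, replace=False)].copy()
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        # Assign each vector to its nearest centroid.
        dists = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster empties.
        for j in range(k):
            if (labels == j).any():
                centroids[j] = x[labels == j].mean(axis=0)
    return centroids, labels

def mine_concepts(part_features, n_prototypes=8, n_concepts=3):
    """Cluster part features into prototypes, then prototypes into concepts.

    Hypothetical stand-in for the prototype/concept-group mining the
    abstract describes; the real PCMNet learns these jointly with the model.
    """
    prototypes, _ = kmeans(part_features, n_prototypes, seed=0)
    _, concept_of_prototype = kmeans(prototypes, n_concepts, seed=1)
    return prototypes, concept_of_prototype

# Toy part-level embeddings standing in for features from discovered parts.
rng = np.random.default_rng(42)
parts = rng.normal(size=(200, 16))
protos, concepts = mine_concepts(parts)
```

Each of the 8 prototypes ends up assigned to one of 3 concept groups, so an explanation can be phrased at the concept level ("this region activates concept 2") rather than per fixed patch.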

@article{alehdaghi2025_2504.12197,
  title={Beyond Patches: Mining Interpretable Part-Prototypes for Explainable AI},
  author={Mahdi Alehdaghi and Rajarshi Bhattacharya and Pourya Shamsolmoali and Rafael M. O. Cruz and Maguelonne Heritier and Eric Granger},
  journal={arXiv preprint arXiv:2504.12197},
  year={2025}
}