Deep learning has driven considerable advances in multimedia systems, yet the interpretability of deep models remains a challenge. State-of-the-art post-hoc explainability methods, such as GradCAM, provide visual interpretations based on heatmaps but lack conceptual clarity. Prototype-based approaches, like ProtoPNet and PIPNet, offer more structured explanations but rely on fixed patches, which limits their robustness and semantic consistency. To address these limitations, a part-prototypical concept mining network (PCMNet) is proposed that dynamically learns interpretable prototypes from semantically meaningful regions. PCMNet clusters prototypes into concept groups, creating semantically grounded explanations without requiring additional annotations. Through a joint process of unsupervised part discovery and concept activation vector extraction, PCMNet captures discriminative concepts and makes interpretable classification decisions. Our extensive experiments comparing PCMNet against state-of-the-art methods on multiple datasets show that it provides a high level of interpretability, stability, and robustness under both clean and occluded scenarios.
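To make the prototype-to-concept grouping idea more concrete, the following is a minimal sketch, not the authors' implementation: it assumes part-prototype vectors have already been learned, clusters them into concept groups with k-means, and scores an image by the maximum similarity between its discovered part features and each concept's prototypes. All names (prototypes, part_features, num_concepts) and the choice of k-means and dot-product similarity are illustrative assumptions, not details taken from the paper.

# Illustrative sketch only: group learned part-prototype vectors into
# "concept" clusters and score an image by its per-concept activations.
# Shapes and names are hypothetical, not from the PCMNet paper.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
prototypes = rng.normal(size=(64, 128))    # 64 learned prototype vectors, dim 128
part_features = rng.normal(size=(5, 128))  # features pooled from 5 discovered parts

num_concepts = 8
kmeans = KMeans(n_clusters=num_concepts, n_init=10, random_state=0).fit(prototypes)

def concept_activations(parts, protos, labels, k):
    # Concept activation: max similarity between any part feature and the
    # prototypes assigned to each concept group.
    sims = parts @ protos.T  # (num_parts, num_prototypes) dot-product similarity
    acts = np.full(k, -np.inf)
    for c in range(k):
        mask = labels == c
        if mask.any():
            acts[c] = sims[:, mask].max()
    return acts

acts = concept_activations(part_features, prototypes, kmeans.labels_, num_concepts)
print("per-concept activations:", np.round(acts, 3))

A linear classifier over such concept activations would then yield class decisions that can be traced back to individual concept groups and image parts, which is the kind of interpretable decision path the abstract describes.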
@article{alehdaghi2025_2504.12197,
  title={Beyond Patches: Mining Interpretable Part-Prototypes for Explainable AI},
  author={Mahdi Alehdaghi and Rajarshi Bhattacharya and Pourya Shamsolmoali and Rafael M. O. Cruz and Maguelonne Heritier and Eric Granger},
  journal={arXiv preprint arXiv:2504.12197},
  year={2025}
}