Explainability is a critical factor in enhancing the trustworthiness and acceptance of artificial intelligence (AI) in healthcare, where decisions directly impact patient outcomes. Despite advances in AI interpretability, clear guidelines on when and to what extent explanations are required in medical applications remain lacking. We propose a novel categorization system comprising four classes of explanation necessity (self-explainable, semi-explainable, non-explainable, and new-patterns discovery) that guides the required level of explanation: local (patient or sample level), global (cohort or dataset level), or both. To support this system, we introduce a mathematical formulation that incorporates three key factors: (i) the robustness of the evaluation protocol, (ii) the variability of expert observations, and (iii) the representation dimensionality of the application. This framework provides a practical tool for researchers to determine the appropriate depth of explainability, addressing the critical question: When does an AI medical application need to be explained, and at what level of detail?
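As an illustration only, the sketch below shows how a categorization of this kind could be operationalized in Python. The three factor names follow the abstract, but the scoring scale, the weights, the thresholds, the ordering of the four classes along the score, and the local/global scope assigned to each class are all assumptions for demonstration; they are not the formulation given in the paper.

# Hypothetical sketch of the explanation-necessity categorization described above.
# All numeric weights, thresholds, and the class-to-scope mapping are illustrative
# assumptions, not the paper's actual mathematical formulation.
from dataclasses import dataclass
from enum import Enum


class ExplanationClass(Enum):
    SELF_EXPLAINABLE = "self-explainable"
    SEMI_EXPLAINABLE = "semi-explainable"
    NON_EXPLAINABLE = "non-explainable"
    NEW_PATTERNS_DISCOVERY = "new-patterns discovery"


@dataclass
class ApplicationFactors:
    protocol_robustness: float            # (i) robustness of the evaluation protocol, assumed in [0, 1]
    expert_variability: float             # (ii) variability of expert observations, assumed in [0, 1]
    representation_dimensionality: float  # (iii) representation dimensionality, assumed normalized to [0, 1]


def explanation_necessity(f: ApplicationFactors) -> tuple[ExplanationClass, str]:
    """Map the three factors to a necessity class and an explanation scope.

    The composite score, cut-offs, and class ordering are placeholders.
    """
    # Assumption: higher expert disagreement and higher-dimensional representations
    # increase the need for explanation; a robust evaluation protocol decreases it.
    score = (0.4 * f.expert_variability
             + 0.4 * f.representation_dimensionality
             - 0.2 * f.protocol_robustness)

    if score < 0.10:
        return ExplanationClass.SELF_EXPLAINABLE, "no explanation required"
    if score < 0.35:
        return ExplanationClass.SEMI_EXPLAINABLE, "global (cohort or dataset level)"
    if score < 0.60:
        return ExplanationClass.NON_EXPLAINABLE, "local (patient or sample level) and global"
    return ExplanationClass.NEW_PATTERNS_DISCOVERY, "local and global, with expert review"


if __name__ == "__main__":
    demo = ApplicationFactors(protocol_robustness=0.8,
                              expert_variability=0.6,
                              representation_dimensionality=0.7)
    cls, scope = explanation_necessity(demo)
    print(f"{cls.value}: {scope}")

In this toy setup, a researcher would score an application on the three factors and read off the suggested class and whether local, global, or both kinds of explanation are indicated; the paper's own formulation should be consulted for the actual definitions and decision boundaries.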
@article{mamalakis2025_2406.00216,
  title   = {The Explanation Necessity for Healthcare AI},
  author  = {Michail Mamalakis and Héloïse de Vareilles and Graham Murray and Pietro Lio and John Suckling},
  journal = {arXiv preprint arXiv:2406.00216},
  year    = {2025}
}