Trustworthy artificial intelligence (AI) is essential in healthcare, particularly for high-stakes tasks like medical image segmentation. Explainable AI and uncertainty quantification enhance AI reliability by addressing key attributes such as robustness, usability, and explainability. Despite extensive technical advances in uncertainty quantification for medical imaging, our understanding of the clinical informativeness and interpretability of uncertainty remains limited. This study introduces a novel framework to explain the potential sources of predictive uncertainty, specifically in cortical lesion segmentation in multiple sclerosis using deep ensembles. The proposed analysis shifts the focus from the uncertainty-error relationship towards relevant medical and engineering factors. Our findings reveal that instance-wise uncertainty is strongly related to lesion size, shape, and cortical involvement. Expert rater feedback confirms that similar factors impede annotator confidence. Evaluations conducted on two datasets (206 patients, almost 2000 lesions) under both in-domain and distribution-shift conditions highlight the utility of the framework across these scenarios.
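The abstract does not detail how instance-wise uncertainty is obtained from the deep ensemble, so the following is only a minimal sketch of one common construction, not the authors' pipeline. It assumes each ensemble member outputs a voxel-wise lesion probability map, takes lesion instances as connected components of the thresholded mean prediction, and scores each instance by the mean voxel-wise predictive entropy inside it; the names `member_probs`, `threshold`, and `instance_uncertainty` are illustrative.

```python
# Sketch: instance-wise uncertainty from a deep ensemble of binary lesion
# segmentation models (illustrative, not the paper's exact method).
import numpy as np
from scipy import ndimage


def instance_uncertainty(member_probs: np.ndarray, threshold: float = 0.5):
    """member_probs: (n_members, *volume_shape) lesion probabilities in [0, 1]."""
    mean_prob = member_probs.mean(axis=0)

    # Voxel-wise predictive entropy of the mean foreground probability.
    eps = 1e-8
    entropy = -(mean_prob * np.log(mean_prob + eps)
                + (1.0 - mean_prob) * np.log(1.0 - mean_prob + eps))

    # Lesion instances = connected components of the thresholded mean map.
    labels, n_lesions = ndimage.label(mean_prob > threshold)

    # Average the entropy over the voxels of each lesion instance.
    return {
        lesion_id: float(entropy[labels == lesion_id].mean())
        for lesion_id in range(1, n_lesions + 1)
    }


# Toy usage: 5 ensemble members on a random 32^3 "volume".
rng = np.random.default_rng(0)
scores = instance_uncertainty(rng.random((5, 32, 32, 32)))
print(len(scores), "lesion instances scored")
```

With independently trained segmentation networks as ensemble members and real MRI-derived probability maps, per-lesion scores of this kind could then be related to lesion size, shape, and cortical involvement, which is the type of analysis the abstract describes.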
@article{molchanova2025_2504.04814,
  title   = {Explainability of AI Uncertainty: Application to Multiple Sclerosis Lesion Segmentation on MRI},
  author  = {Nataliia Molchanova and Pedro M. Gordaliza and Alessandro Cagol and Mario Ocampo--Pineda and Po--Jui Lu and Matthias Weigel and Xinjie Chen and Erin S. Beck and Haris Tsagkas and Daniel Reich and Anna Stölting and Pietro Maggi and Delphine Ribes and Adrien Depeursinge and Cristina Granziera and Henning Müller and Meritxell Bach Cuadra},
  journal = {arXiv preprint arXiv:2504.04814},
  year    = {2025}
}