False Promises in Medical Imaging AI? Assessing Validity of Outperformance Claims

Performance comparisons are fundamental in medical imaging artificial intelligence (AI) research, often driving claims of superiority based on relative improvements in common performance metrics. However, such claims frequently rely solely on empirical mean performance. In this paper, we investigate whether newly proposed methods genuinely outperform the state of the art by analyzing a representative cohort of medical imaging papers. We quantify the probability of false claims using a Bayesian approach that leverages reported results alongside empirically estimated model congruence to assess whether the relative ranking of methods is likely to have occurred by chance. According to our results, the majority (>80%) of papers claim outperformance when introducing a new method. Our analysis further reveals a high probability (>5%) of false outperformance claims in 86% of classification papers and 53% of segmentation papers. These findings highlight a critical flaw in current benchmarking practices: claims of outperformance in medical imaging AI are frequently unsubstantiated, posing a risk of misdirecting future research efforts.
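The abstract does not spell out the underlying model, but the core idea, that a small mean improvement between two highly congruent (correlated) models may well arise by chance, can be illustrated with a minimal Bayesian sketch. The Python snippet below is an assumption-laden toy, not the paper's actual method: it treats per-case scores as paired and normally distributed, places a flat prior on the true mean difference, and uses the correlation rho as a stand-in for model congruence; mean_new, mean_sota, sd, and n_cases are all hypothetical inputs.

import numpy as np

def prob_false_outperformance(mean_new, mean_sota, sd, rho, n_cases,
                              n_samples=100_000, seed=0):
    """Toy estimate of the probability that a reported ranking of two
    methods on a shared test set arose by chance (all inputs assumed)."""
    rng = np.random.default_rng(seed)
    # Standard error of the mean difference for paired scores with
    # per-case spread sd and inter-model correlation rho.
    se_diff = np.sqrt(2 * sd**2 * (1 - rho) / n_cases)
    # Under a flat prior and a normal likelihood, the posterior over the
    # true mean difference is centered on the reported difference.
    delta = rng.normal(mean_new - mean_sota, se_diff, size=n_samples)
    # Probability that the true difference is <= 0, i.e. that the
    # outperformance claim does not hold.
    return float(np.mean(delta <= 0))

# Hypothetical example: a reported +0.01 Dice improvement, per-case
# spread 0.10, congruence 0.80, and 100 test cases.
print(prob_false_outperformance(0.85, 0.84, sd=0.10, rho=0.80, n_cases=100))

In this toy configuration the probability of a false claim comes out around 6%, above the 5% threshold the paper uses, illustrating how an apparently clear mean improvement can remain statistically unconvincing.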
@article{christodoulou2025_2505.04720,
  title   = {False Promises in Medical Imaging AI? Assessing Validity of Outperformance Claims},
  author  = {Evangelia Christodoulou and Annika Reinke and Pascaline Andrè and Patrick Godau and Piotr Kalinowski and Rola Houhou and Selen Erkan and Carole H. Sudre and Ninon Burgos and Sofiène Boutaj and Sophie Loizillon and Maëlys Solal and Veronika Cheplygina and Charles Heitz and Michal Kozubek and Michela Antonelli and Nicola Rieke and Antoine Gilson and Leon D. Mayer and Minu D. Tizabi and M. Jorge Cardoso and Amber Simpson and Annette Kopp-Schneider and Gaël Varoquaux and Olivier Colliot and Lena Maier-Hein},
  journal = {arXiv preprint arXiv:2505.04720},
  year    = {2025}
}