Integrating Explainable AI in Medical Devices: Technical, Clinical and Regulatory Insights and Recommendations

There is a growing demand for the use of Artificial Intelligence (AI) and Machine Learning (ML) in healthcare, particularly as clinical decision support systems to assist medical professionals. However, the complexity of many of these models, often referred to as black box models, raises concerns about their safe integration into clinical settings, as it is difficult to understand how they arrive at their predictions. This paper discusses insights and recommendations derived from an expert working group convened by the UK Medicines and Healthcare products Regulatory Agency (MHRA). The group consisted of healthcare professionals, regulators, and data scientists, with a primary focus on evaluating the outputs of different AI algorithms in clinical decision-making contexts. The group also evaluated findings from a pilot study investigating clinicians' behaviour and interaction with AI methods during clinical diagnosis. Incorporating explainable AI (XAI) methods is crucial for ensuring the safety and trustworthiness of medical AI devices in clinical settings, and adequate training for stakeholders is essential to address potential issues. Further insights and recommendations for the safe adoption of AI systems in healthcare settings are provided.
@article{alattal2025_2505.06620,
  title={Integrating Explainable AI in Medical Devices: Technical, Clinical and Regulatory Insights and Recommendations},
  author={Dima Alattal and Asal Khoshravan Azar and Puja Myles and Richard Branson and Hatim Abdulhussein and Allan Tucker},
  journal={arXiv preprint arXiv:2505.06620},
  year={2025}
}