Metric Hub: A metric library and practical selection workflow for use-case-driven data quality assessment in medical AI

Katinka Becker
Maximilian P. Oppelt
Tobias S. Zech
Martin Seyferth
Sandie Cabon
Vanja Miskovic
Ivan Cimrak
Michal Kozubek
Giuseppe D'Avenio
Ilaria Campioni
Jana Fehr
Kanjar De
Ismail Mahmoudi
Emilio Dolgener Cantu
Laurenz Ottmann
Andreas Klaß
Galaad Altares
Jackie Ma
Alireza Salehi M.
Nadine R. Lang-Richter
Tobias Schaeffter
Daniel Schwabe
Main: 25 pages, 21 figures, 5 tables
Bibliography: 8 pages
Appendix: 73 pages
Abstract

Machine learning (ML) in medicine has transitioned from research to concrete applications supporting medical purposes such as therapy selection, monitoring and treatment. Acceptance and effective adoption by clinicians and patients, as well as regulatory approval, require evidence of trustworthiness. A major factor in the development of trustworthy AI is the quantification of data quality for AI model training and testing. We have recently proposed the METRIC-framework for systematically evaluating the suitability (fit-for-purpose) of data for medical ML for a given task. Here, we operationalize this theoretical framework by introducing a collection of data quality metrics - the metric library - for practically measuring data quality dimensions. For each metric, we provide a metric card summarizing the most important information, including definition, applicability, examples, pitfalls and recommendations, to support the understanding and implementation of these metrics. Furthermore, we discuss strategies and provide decision trees for choosing an appropriate set of data quality metrics from the metric library for specific use cases. We demonstrate the impact of our approach using the PTB-XL ECG dataset as an example. This is a first step toward enabling fit-for-purpose evaluation of training and test data in practice, as the basis for establishing trustworthy AI in medicine.
