
Progress and Prospects for Fairness in Healthcare and Medical Image Analysis

Abstract

Machine learning-enabled medical image analysis has become a vital component of modern automated diagnosis systems. However, machine learning models have been shown to exhibit systematic bias against certain subgroups of people, e.g., yielding worse predictive performance for elderly women. Such bias is harmful and dangerous in so sensitive a domain, and researchers have therefore been developing bias mitigation algorithms to address fairness issues in the general machine learning field. However, given the specific characteristics of medical imaging, fairness in medical image analysis (MedIA) requires additional effort. Hence, in this survey, we give a comprehensive review of the current progress of fairness studies in machine learning broadly and in MedIA in particular. Specifically, we first discuss the definitions of fairness and analyze the sources of bias in medical imaging. Then, we review current research on fairness for MedIA and present a collection of public medical imaging datasets that can be used to evaluate fairness in MedIA. Furthermore, we conduct extensive experiments to evaluate the fairness of several medical imaging tasks, including classification, object detection, and landmark detection. Finally, we discuss the challenges and potential future directions for developing fair MedIA.
