Mitigating the Effect of Dataset Bias on Training Deep Models for
Biomedical Images
Deep learning has attracted tremendous attention in computer-aided diagnosis, particularly in biomedical image analysis. However, medical datasets are subject to the dataset bias problem, where data of the same modality and body part show different distributions across institutions. Such bias may arise from various confounding factors, including operation policies, machine protocols, and treatment preferences. Consequently, machine learning models trained at one hospital site cannot confidently generalize to others. In this study, we analyzed three large-scale public chest X-ray datasets and found that vanilla training of deep models for diagnosing common thorax diseases suffers from exactly this dataset bias problem. To mitigate the bias, we framed the problem as a multi-source domain generalization task and made two contributions: 1. we improved the classical Bias-regularized Learning method by designing a new loss function; 2. we proposed a new domain-guided data augmentation method called MCT (Multi-layer Cross-gradient Training) for synthesizing data from unseen domains. Our model can be deployed directly to new-domain data without retraining, while suffering far less performance degradation than baselines such as training on all domains together. Empirical studies verified the effectiveness of our methods both quantitatively and qualitatively. Our demo training code is publicly available.
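To illustrate the general idea behind cross-gradient data augmentation (the family of methods MCT builds on), the sketch below perturbs an input along the gradient of a domain classifier's loss, nudging the sample toward the appearance of another domain. This is a minimal, hypothetical illustration with a toy linear domain classifier; the function names, the epsilon step size, and the single-layer setup are all assumptions, and the paper's MCT method applies this principle in a multi-layer fashion not reproduced here.

```python
import numpy as np

def domain_grad(x, w, d):
    """Gradient of the logistic domain-classification loss w.r.t. input x.

    x : input feature vector
    w : weights of a toy linear domain classifier (assumed pretrained)
    d : true domain label (0 or 1)
    """
    p = 1.0 / (1.0 + np.exp(-x @ w))  # predicted probability of domain 1
    return (p - d) * w                # dL/dx for the logistic loss

def cross_gradient_augment(x, w, d, eps=0.5):
    """Shift x along the domain gradient to mimic a domain change."""
    g = domain_grad(x, w, d)
    return x + eps * g / (np.linalg.norm(g) + 1e-8)  # normalized step

rng = np.random.default_rng(0)
x = rng.normal(size=4)          # a sample's feature vector
w = rng.normal(size=4)          # toy domain-classifier weights
x_aug = cross_gradient_augment(x, w, d=0)
```

The augmented sample `x_aug` keeps the original's class content but drifts in the direction the domain classifier finds most discriminative, which is what makes the training model less reliant on domain-specific cues.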