Learning to Generalize towards Unseen Domains via a Content-Aware Style Invariant Model for Disease Detection from Chest X-rays

IEEE Journal of Biomedical and Health Informatics (IEEE JBHI), 2023
Abstract

Performance degradation due to source domain mismatch is a longstanding challenge in deep learning-based medical image analysis, particularly for chest X-rays (CXRs). Several methods (e.g., adversarial training, multi-domain mixups) have been proposed to extract domain-invariant high-level features that address this domain shift. However, these methods do not explicitly regularize the content and style characteristics of the extracted domain-invariant features. Recent studies have demonstrated that CNN models exhibit a strong bias toward styles (e.g., uninformative textures) rather than content (e.g., shape), in stark contrast to the human visual system. Radiologists tend to learn visual cues from CXRs and thus perform well across multiple domains. Therefore, in medical imaging for pathology diagnosis from CXR images, models should extract domain-invariant features that are style-invariant and content-biased. Motivated by this, we employ novel style randomization modules (SRMs) at both the image and feature levels that work together hierarchically to create richly style-perturbed features on the fly while keeping the content intact. In addition, we impose consistency regularization on both the global semantic features and the predicted probability distributions of the style-perturbed and unperturbed versions of the same CXR image, tuning the model's sensitivity toward content markers for accurate predictions. Extensive experiments with three large-scale thoracic disease datasets, i.e., CheXpert, MIMIC-CXR, and BRAX, demonstrate that our proposed framework is more robust in the presence of domain shift and achieves state-of-the-art performance.
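To make the two core ideas concrete, below is a minimal, illustrative sketch of (a) a feature-level style perturbation that replaces per-channel statistics while preserving the normalized "content" activations, and (b) a symmetric consistency loss between predictions for the clean and perturbed views. This is an assumption-laden toy version in NumPy, not the paper's exact SRM formulation; all function names, the interpolation scheme, and the sampling ranges are hypothetical.

```python
import numpy as np


def style_randomize(feat, alpha_low=0.1, alpha_high=1.0, rng=None):
    """Perturb the style (per-channel mean/std) of a feature map of shape
    (C, H, W) while keeping its content (style-normalized activations) intact.

    The interpolation toward random target statistics is an illustrative
    choice, not the paper's exact module.
    """
    rng = rng or np.random.default_rng()
    mu = feat.mean(axis=(1, 2), keepdims=True)            # per-channel style: mean
    sigma = feat.std(axis=(1, 2), keepdims=True) + 1e-6   # per-channel style: std
    content = (feat - mu) / sigma                         # style-normalized content

    # Interpolate the original statistics toward randomly sampled ones.
    alpha = rng.uniform(alpha_low, alpha_high)
    mu_new = (1 - alpha) * mu + alpha * rng.normal(size=mu.shape)
    sigma_new = (1 - alpha) * sigma + alpha * rng.uniform(0.5, 1.5, size=sigma.shape)

    # Re-stylize: same content, new (perturbed) style.
    return content * sigma_new + mu_new


def consistency_loss(p, q, eps=1e-8):
    """Symmetric KL divergence between the predicted probability
    distributions for the clean and style-perturbed views of one image."""
    p, q = p + eps, q + eps
    kl_pq = np.sum(p * np.log(p / q))
    kl_qp = np.sum(q * np.log(q / p))
    return 0.5 * (kl_pq + kl_qp)
```

Because the perturbation only rewrites each channel's mean and standard deviation, re-normalizing the perturbed map recovers (up to numerical precision) the same content tensor as the original, which is exactly the invariance the consistency terms are meant to enforce.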
