
DiamondGAN: Unified Multi-Modal Generative Adversarial Networks for MRI Sequences Synthesis

International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2019
Abstract

Synthesizing MR imaging sequences is highly attractive for clinical practice, as individual sequences are often missing or of poor quality (e.g., due to motion). Naturally, the idea arises that a target modality would benefit from multi-modal input. However, existing methods fail to scale to multiple non-aligned imaging modalities and suffer from the common drawbacks of complex multi-modal imaging pipelines. We propose DiamondGAN, a novel, scalable, multi-modal approach. Our model performs flexible non-aligned cross-modality synthesis and data infill when given multiple modalities or any arbitrary subset of them, learning structured information in an end-to-end fashion. We synthesize two clinically relevant MRI sequences, double inversion recovery (DIR) and contrast-enhanced T1 (T1-c), reconstructed from three common MRI sequences. In addition, we perform a multi-rater visual evaluation experiment and find that trained radiologists are unable to distinguish our synthetic DIR images from real ones.
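To make the "arbitrary subsets of input modalities" idea concrete, below is a minimal PyTorch sketch of one plausible input scheme: the available sequences are stacked as image channels, missing ones are zero-filled, and a binary availability mask is concatenated so the generator knows which inputs are present. This is an illustrative assumption, not the paper's actual architecture; the class and parameter names (`MultiModalGenerator`, `n_modalities`, `base_ch`) are hypothetical.

```python
import torch
import torch.nn as nn

class MultiModalGenerator(nn.Module):
    """Sketch (hypothetical, not DiamondGAN's architecture): a generator
    that accepts any subset of input MRI sequences by stacking them as
    channels, zero-filling the missing ones, and appending a binary
    availability mask per modality."""

    def __init__(self, n_modalities: int = 3, base_ch: int = 64):
        super().__init__()
        # One channel per input sequence plus one per availability flag.
        in_ch = n_modalities * 2
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base_ch, base_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base_ch, 1, kernel_size=3, padding=1),
            nn.Tanh(),  # synthetic target sequence scaled to [-1, 1]
        )

    def forward(self, images: torch.Tensor, available: torch.Tensor) -> torch.Tensor:
        # images:    (B, n_modalities, H, W), missing sequences zero-filled
        # available: (B, n_modalities) binary flags marking present inputs
        b, m, h, w = images.shape
        mask = available.view(b, m, 1, 1).expand(b, m, h, w)
        # Mask out absent modalities and tell the network which are present.
        return self.net(torch.cat([images * mask, mask], dim=1))

# Usage: synthesize a target sequence from two different input subsets.
gen = MultiModalGenerator(n_modalities=3)
imgs = torch.randn(2, 3, 128, 128)
avail = torch.tensor([[1., 1., 0.],   # sample 1: modalities 0 and 1 present
                      [1., 0., 1.]])  # sample 2: modalities 0 and 2 present
synthetic = gen(imgs, avail)          # shape: (2, 1, 128, 128)
```

The mask-concatenation trick is one common way to let a single network handle variable input availability at both training and inference time, rather than training a separate model per input combination.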
