Using Out-of-the-Box Frameworks for Contrastive Unpaired Image Translation for Vestibular Schwannoma and Cochlea Segmentation: An approach for the crossMoDA Challenge

Abstract
The purpose of this study is to apply and evaluate out-of-the-box deep learning frameworks for the crossMoDA challenge. We use CUT, an unpaired image-to-image translation model based on patchwise contrastive learning and adversarial learning, for domain adaptation from contrast-enhanced T1 MR to high-resolution T2 MR. For data augmentation, we generate additional images in which the vestibular schwannomas have lower signal intensity. For the segmentation task, we use the nnU-Net framework. Our final submission achieved mean Dice scores of 0.8299 in the validation phase and 0.8253 in the test phase, ranking 3rd in the crossMoDA challenge.
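The abstract does not describe how the low-intensity tumor images are generated; the sketch below shows one plausible form of such an augmentation, multiplicatively darkening the tumor region of a translated image using its segmentation mask. The function name `lower_tumor_intensity` and the `scale_range` parameter are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def lower_tumor_intensity(image, tumor_mask, scale_range=(0.3, 0.7), rng=None):
    """Hypothetical sketch of the intensity-lowering augmentation.

    Darkens the vestibular-schwannoma region of `image` by a random
    multiplicative factor to mimic tumors with lower T2 signal.
    `scale_range` is an assumed hyperparameter, not from the paper.
    """
    if rng is None:
        rng = np.random.default_rng()
    scale = rng.uniform(*scale_range)      # random darkening factor
    augmented = image.astype(np.float32)   # work on a float copy
    augmented[tumor_mask > 0] *= scale     # darken only the tumor voxels
    return augmented

# Example usage on a synthetic volume with a labeled tumor region:
image = np.full((32, 32, 32), 200.0)
mask = np.zeros_like(image)
mask[10:15, 10:15, 10:15] = 1
augmented = lower_tumor_intensity(image, mask)
```

In practice such augmented images would be added to the training set of the downstream nnU-Net segmentation model, alongside the unmodified translated images.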