Multi-level Asymmetric Contrastive Learning for Volumetric Medical Image Segmentation Pre-training

Medical image segmentation is a fundamental yet challenging task due to the arduous process of acquiring large volumes of high-quality labeled data from experts. Contrastive learning offers a promising but still problematic solution to this dilemma. Firstly, existing medical contrastive learning strategies focus on extracting image-level representations, ignoring the abundant multi-level representations. Furthermore, they underutilize the decoder, either by random initialization or by pre-training it separately from the encoder, thereby neglecting the potential collaboration between the two. To address these issues, we propose a novel multi-level asymmetric contrastive learning framework named MACL for volumetric medical image segmentation pre-training. Specifically, we design an asymmetric contrastive learning structure to pre-train the encoder and decoder simultaneously, providing better initialization for segmentation models. Moreover, we develop a multi-level contrastive learning strategy that integrates correspondences across feature-level, image-level, and pixel-level representations, ensuring that the encoder and decoder capture comprehensive details from representations of varying scales and granularities during the pre-training phase. Finally, experiments on 8 medical image datasets show that our MACL framework outperforms 11 existing contrastive learning strategies. Specifically, MACL produces more precise predictions in the visualized results and achieves Dice scores 1.72%, 7.87%, 2.49%, and 1.48% higher than the previous best results on ACDC, MMWHS, HVSMR, and CHAOS with 10% labeled data, respectively. MACL also generalizes well across 5 U-Net backbone variants. Our code will be released at this https URL.
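To make the multi-level objective concrete, below is a minimal PyTorch sketch of how feature-level, image-level, and pixel-level contrastive terms could be combined into one pre-training loss. It is an illustrative assumption, not the paper's released implementation: the function names (info_nce, multi_level_loss), the loss weights, and the upstream sampling of matched pixel embeddings are all hypothetical.

import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    # Standard InfoNCE between two batches of embeddings of shape (N, D);
    # matching indices are positives, all other pairs act as negatives.
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature              # (N, N) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def multi_level_loss(feat1, feat2, img1, img2, pix1, pix2,
                     w_feat=1.0, w_img=1.0, w_pix=1.0):
    # feat*: encoder bottleneck embeddings, (N, D)
    # img*:  projected image-level embeddings, (N, D)
    # pix*:  decoder pixel embeddings flattened to (M, D), with corresponding
    #        spatial locations paired as positives (sampling is assumed to
    #        happen upstream). Weights w_* are illustrative placeholders.
    return (w_feat * info_nce(feat1, feat2)
            + w_img * info_nce(img1, img2)
            + w_pix * info_nce(pix1, pix2))

In such a setup, each term pulls together embeddings of two augmented views of the same volume at its own granularity, which is one straightforward way to realize the feature-, image-, and pixel-level correspondences described above.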
@article{zeng2025_2309.11876,
  title   = {Multi-level Asymmetric Contrastive Learning for Volumetric Medical Image Segmentation Pre-training},
  author  = {Shuang Zeng and Lei Zhu and Xinliang Zhang and Micky C Nnamdi and Wenqi Shi and J Ben Tamo and Qian Chen and Hangzhou He and Lujia Jin and Zifeng Tian and Qiushi Ren and Zhaoheng Xie and Yanye Lu},
  journal = {arXiv preprint arXiv:2309.11876},
  year    = {2025}
}