Introducing 3D Representation for Medical Image Volume-to-Volume Translation via Score Fusion

In volume-to-volume translation for medical images, existing models often struggle to capture the inherent volumetric distribution using 3D voxel-space representations, due to high computational and dataset demands. We present Score-Fusion, a novel volumetric translation model that effectively learns 3D representations by ensembling perpendicularly trained 2D diffusion models in score-function space. By carefully initializing our model as an average of 2D models, as in TPDM, we reduce 3D training to a fine-tuning process and thereby mitigate both computational and data demands. Furthermore, we explicitly design the 3D model's hierarchical layers to learn ensembles of 2D features, further improving efficiency and performance. Score-Fusion also extends naturally to multi-modality settings by fusing diffusion models conditioned on different inputs, enabling flexible and accurate integration. We demonstrate that 3D representation is essential for better performance on downstream recognition tasks such as tumor segmentation, where most segmentation models rely on 3D representations. Extensive experiments show that Score-Fusion achieves superior accuracy and volumetric fidelity in 3D medical image super-resolution and modality translation. Beyond these improvements, our work also provides broader insight into learning-based approaches for score-function fusion.
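To make the fusion idea concrete, the following is a minimal sketch (not the authors' implementation) of TPDM-style score averaging: two 2D score models are applied along perpendicular slicing axes of a 3D volume, and their outputs are averaged. The `score_axial` and `score_coronal` functions are hypothetical stand-ins for trained 2D diffusion networks; here they return a toy Gaussian-prior score so the sketch is runnable.

```python
import numpy as np

def score_axial(slices):
    # Placeholder for a trained 2D diffusion model applied to a
    # batch of axial (H, W) slices; toy score of a N(0, I) prior.
    return -slices

def score_coronal(slices):
    # Placeholder for a second 2D model trained on coronal slices.
    return -slices

def fused_score(volume):
    """Average scores from two perpendicular 2D models over a 3D volume.

    This mirrors the initialization described in the abstract: the
    3D score is started as the mean of 2D scores (as in TPDM).
    """
    # Axial view: depth axis acts as the batch of (H, W) slices.
    s_ax = score_axial(volume)
    # Coronal view: slice along H, score, then restore the axis order.
    s_co = np.transpose(
        score_coronal(np.transpose(volume, (1, 0, 2))), (1, 0, 2)
    )
    return 0.5 * (s_ax + s_co)

vol = np.random.randn(4, 8, 8)
fused = fused_score(vol)
assert fused.shape == vol.shape
```

In Score-Fusion, this average is only the starting point; a 3D network initialized at this ensemble is then fine-tuned so its hierarchical layers learn richer combinations of the 2D features.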
@article{zhu2025_2501.07430,
  title={Introducing 3D Representation for Medical Image Volume-to-Volume Translation via Score Fusion},
  author={Xiyue Zhu and Dou Hoon Kwark and Ruike Zhu and Kaiwen Hong and Yiqi Tao and Shirui Luo and Yudu Li and Zhi-Pei Liang and Volodymyr Kindratenko},
  journal={arXiv preprint arXiv:2501.07430},
  year={2025}
}