VIViT: Variable-Input Vision Transformer Framework for 3D MR Image Segmentation

Self-supervised pretraining techniques have been widely used to improve the performance of downstream tasks. However, real-world magnetic resonance (MR) studies usually consist of different sets of contrasts due to differing acquisition protocols, which poses challenges for current deep learning methods in large-scale pretraining and in downstream tasks with different input requirements, since these methods typically require a fixed set of input modalities or contrasts. To address this challenge, we propose the variable-input ViT (VIViT), a transformer-based framework designed for self-supervised pretraining and segmentation finetuning with a variable set of contrasts per study. With this ability, our approach can maximize data availability during pretraining and can transfer the learned knowledge from pretraining to downstream tasks despite variations in input requirements. We validate our method on brain infarct and brain tumor segmentation, where it outperforms current CNN- and ViT-based models with mean Dice scores of 0.624 and 0.883, respectively. These results highlight the efficacy of our design for better adaptability and performance on tasks with real-world heterogeneous MR data.
@article{das2025_2505.08693,
  title={VIViT: Variable-Input Vision Transformer Framework for 3D MR Image Segmentation},
  author={Badhan Kumar Das and Ajay Singh and Gengyan Zhao and Han Liu and Thomas J. Re and Dorin Comaniciu and Eli Gibson and Andreas Maier},
  journal={arXiv preprint arXiv:2505.08693},
  year={2025}
}
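The abstract does not describe the architecture in detail. As a rough illustration of how a transformer encoder could accept a variable set of MR contrasts per study, the following is a minimal PyTorch sketch, not the authors' implementation: each available contrast volume is patch-embedded, tagged with a learnable contrast-type embedding, and the resulting token sequences are concatenated into one variable-length sequence. The class name `VariableInputViT3D`, the patch size, embedding dimension, and contrast-ID scheme are all assumptions introduced for illustration; positional embeddings, the pretraining objective, and the segmentation decoder are omitted.

```python
import torch
import torch.nn as nn

class VariableInputViT3D(nn.Module):
    """Hypothetical sketch of a variable-input 3D ViT encoder.

    Each available MR contrast volume is patch-embedded independently,
    tagged with a learnable contrast-type embedding, and the token
    sequences are concatenated so the transformer attends across
    whatever subset of contrasts a study happens to contain.
    Positional embeddings and the segmentation head are omitted.
    """

    def __init__(self, num_contrasts=4, patch_size=16, embed_dim=384,
                 depth=6, num_heads=6):
        super().__init__()
        # 3D patch embedding shared across contrasts (one channel per volume).
        self.patch_embed = nn.Conv3d(1, embed_dim,
                                     kernel_size=patch_size, stride=patch_size)
        # One learnable embedding per known contrast type (e.g. T1, T2, FLAIR, DWI).
        self.contrast_embed = nn.Embedding(num_contrasts, embed_dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)

    def forward(self, volumes, contrast_ids):
        """volumes: list of (B, 1, D, H, W) tensors, one per available contrast.
        contrast_ids: list of ints identifying each contrast's type."""
        tokens = []
        for vol, cid in zip(volumes, contrast_ids):
            x = self.patch_embed(vol)                # (B, C, d, h, w)
            x = x.flatten(2).transpose(1, 2)         # (B, N, C) patch tokens
            # Tag every token of this volume with its contrast-type embedding.
            x = x + self.contrast_embed(torch.tensor(cid, device=x.device))
            tokens.append(x)
        tokens = torch.cat(tokens, dim=1)            # variable-length sequence
        return self.encoder(tokens)

# Example: a study containing only T1 (id 0) and FLAIR (id 2) volumes.
model = VariableInputViT3D()
t1 = torch.randn(1, 1, 64, 64, 64)
flair = torch.randn(1, 1, 64, 64, 64)
features = model([t1, flair], contrast_ids=[0, 2])
print(features.shape)  # torch.Size([1, 128, 384]); 64 patch tokens per contrast
```

Because the token sequence length simply grows or shrinks with the number of available contrasts, the same encoder weights can, in principle, be pretrained on studies with any contrast combination and finetuned on downstream tasks with different input requirements, which is the property the abstract emphasizes.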