CLIMB: Data Foundations for Large Scale Multimodal Clinical Foundation Models

Recent advances in clinical AI have enabled remarkable progress across many clinical domains. However, existing benchmarks and models are primarily limited to a small set of modalities and tasks, which hinders the development of large-scale multimodal methods that can make holistic assessments of patient health and well-being. To bridge this gap, we introduce the Clinical Large-Scale Integrative Multimodal Benchmark (CLIMB), a comprehensive clinical benchmark unifying diverse clinical data across imaging, language, temporal, and graph modalities. CLIMB comprises 4.51 million patient samples totaling 19.01 terabytes distributed across 2D imaging, 3D video, time series, graphs, and multimodal data. Through extensive empirical evaluation, we demonstrate that multitask pretraining significantly improves performance on understudied domains, achieving up to a 29% improvement in ultrasound and 23% in ECG analysis over single-task learning. Pretraining on CLIMB also effectively improves models' generalization capability to new tasks, and strong unimodal encoder performance translates well to multimodal performance when paired with task-appropriate fusion strategies. Our findings provide a foundation for new architecture designs and pretraining strategies to advance clinical AI research. Code is released at this https URL.
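As a rough illustration of the kind of fusion strategy the abstract alludes to (pairing strong unimodal encoders with a task-appropriate fusion step), the following is a minimal, hypothetical sketch in PyTorch and not the authors' released code; the encoder names, dimensions, and toy inputs are assumptions for illustration only.

# Hypothetical sketch: late fusion of independently pretrained unimodal
# encoders (e.g., an imaging backbone and a time-series/ECG backbone)
# by feature concatenation followed by a small classification head.
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    def __init__(self, image_encoder, ecg_encoder, image_dim, ecg_dim, num_classes):
        super().__init__()
        self.image_encoder = image_encoder  # assumed 2D-imaging encoder
        self.ecg_encoder = ecg_encoder      # assumed time-series encoder
        self.head = nn.Sequential(
            nn.Linear(image_dim + ecg_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, image, ecg):
        # Encode each modality separately, then fuse by concatenation.
        z_img = self.image_encoder(image)
        z_ecg = self.ecg_encoder(ecg)
        return self.head(torch.cat([z_img, z_ecg], dim=-1))

# Toy usage with stand-in encoders (flatten + linear projection).
img_enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
ecg_enc = nn.Sequential(nn.Flatten(), nn.Linear(12 * 500, 64))
model = LateFusionClassifier(img_enc, ecg_enc, image_dim=128, ecg_dim=64, num_classes=5)
logits = model(torch.randn(2, 3, 32, 32), torch.randn(2, 12, 500))
print(logits.shape)  # torch.Size([2, 5])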
@article{dai2025_2503.07667,
  title={CLIMB: Data Foundations for Large Scale Multimodal Clinical Foundation Models},
  author={Wei Dai and Peilin Chen and Malinda Lu and Daniel Li and Haowen Wei and Hejie Cui and Paul Pu Liang},
  journal={arXiv preprint arXiv:2503.07667},
  year={2025}
}