A Multimodal Knowledge-enhanced Whole-slide Pathology Foundation Model

Remarkable strides in computational pathology (CPath) have been made by task-agnostic foundation models (FMs) that advance the performance of a wide array of downstream clinical tasks. Despite this promising performance, several challenges remain. First, prior works have resorted to either vision-only or image-caption data, disregarding pathology reports, which carry clinically authentic interpretations from pathologists, and gene expression profiles; each of these modalities offers distinct knowledge for versatile clinical applications. Second, current progress in pathology FMs predominantly concentrates on the patch level, where the restricted context of patch-level pretraining fails to capture whole-slide patterns. Even recent slide-level FMs still struggle to provide whole-slide context for patch representations. In this study, for the first time, we develop a pathology foundation model incorporating three levels of modalities: pathology slides, pathology reports, and gene expression data, resulting in 26,169 slide-level modality pairs from 10,275 patients across 32 cancer types, amounting to over 116 million pathological patch images. To leverage these data for CPath, we propose a novel whole-slide pretraining paradigm, Multimodal Self-TAught PRetraining (mSTAR), which injects multimodal whole-slide context into the patch representation. The proposed paradigm revolutionizes the pretraining workflow for CPath, enabling the pathology FM to acquire whole-slide context. To the best of our knowledge, this is the first attempt to incorporate three modalities at the whole-slide level for enhancing pathology FMs. To systematically evaluate the capabilities of mSTAR, we built the largest spectrum of oncological benchmarks to date, spanning 7 categories of oncological applications with 97 practical oncological tasks of 15 types.
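The abstract does not detail mSTAR's training objective, so the following is only a rough illustration of what injecting multimodal whole-slide context into patch-derived representations could look like: a minimal PyTorch sketch that pools patch embeddings into a slide embedding via attention and aligns it with report-text and gene-expression embeddings through a symmetric contrastive loss. All module names, dimensions, encoders, and the loss choice are assumptions for illustration, not the paper's actual method.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionPool(nn.Module):
    """Aggregate patch embeddings into one slide embedding (a common MIL-style choice, assumed here)."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1))

    def forward(self, patches):
        # patches: (n_patches, dim) features from a patch encoder
        w = torch.softmax(self.score(patches), dim=0)  # one attention weight per patch
        return (w * patches).sum(dim=0)                # slide embedding: (dim,)

def symmetric_info_nce(a, b, tau=0.07):
    """Symmetric InfoNCE between two batches of embeddings, each (batch, dim)."""
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / tau
    targets = torch.arange(a.size(0), device=a.device)  # matched pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

def multimodal_alignment_loss(slide_feats, report_emb, gene_emb, pool):
    # slide_feats: list of (n_i, dim) patch-feature tensors, one per slide;
    # report_emb / gene_emb: (batch, dim) outputs of assumed text and gene-expression encoders.
    slide_emb = torch.stack([pool(f) for f in slide_feats])  # (batch, dim)
    return symmetric_info_nce(slide_emb, report_emb) + symmetric_info_nce(slide_emb, gene_emb)

Because the loss is applied to a pooled slide embedding, gradients flow back through every patch feature, which is one plausible way a slide-level multimodal signal could shape patch representations; whether mSTAR uses contrastive alignment, distillation, or another mechanism is not stated in the abstract.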
@article{xu2025_2407.15362,
  title={A Multimodal Knowledge-enhanced Whole-slide Pathology Foundation Model},
  author={Yingxue Xu and Yihui Wang and Fengtao Zhou and Jiabo Ma and Cheng Jin and Shu Yang and Jinbang Li and Zhengyu Zhang and Chenglong Zhao and Huajun Zhou and Zhenhui Li and Huangjing Lin and Xin Wang and Jiguang Wang and Anjia Han and Ronald Cheong Kin Chan and Li Liang and Xiuming Zhang and Hao Chen},
  journal={arXiv preprint arXiv:2407.15362},
  year={2025}
}