Endo-CLIP: Progressive Self-Supervised Pre-training on Raw Colonoscopy Records

Pre-training on paired image-text colonoscopy records offers substantial potential for improving endoscopic image analysis, but it faces three challenges: non-informative background images, complex medical terminology, and ambiguous multi-lesion descriptions. We introduce Endo-CLIP, a novel self-supervised framework that adapts Contrastive Language-Image Pre-training (CLIP) to this domain. Endo-CLIP proceeds in three stages (cleansing, attunement, and unification) that address these challenges by (1) removing background frames, (2) leveraging large language models to extract clinical attributes for fine-grained contrastive learning, and (3) employing patient-level cross-attention to resolve multi-polyp ambiguities. Extensive experiments demonstrate that Endo-CLIP significantly outperforms state-of-the-art pre-training methods on zero-shot and few-shot polyp detection and classification, paving the way for more accurate and clinically relevant endoscopic analysis.
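To make the unification stage concrete, below is a minimal, hypothetical sketch of patient-level cross-attention: each lesion description extracted from one report attends over all polyp frames from the same patient, and an in-patient contrastive loss aligns each attended image feature with its own description. The abstract does not specify the implementation, so the module names, embedding dimension, and loss form here are assumptions, not the authors' code.

# Hypothetical sketch of the patient-level cross-attention idea (stage 3).
# Given several polyp image embeddings and several lesion-description text
# embeddings from one patient record, each text query attends over the images
# so that multi-polyp reports align softly with their most compatible frames.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatientCrossAttention(nn.Module):
    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, text_emb: torch.Tensor, img_emb: torch.Tensor) -> torch.Tensor:
        # text_emb: (1, n_lesions, dim)  per-lesion descriptions from one report
        # img_emb:  (1, n_frames, dim)   polyp frames from the same patient
        # Each lesion description queries the patient's frames, yielding an
        # image-grounded representation for that lesion.
        attended, _ = self.attn(query=text_emb, key=img_emb, value=img_emb)
        return attended

def patient_contrastive_loss(text_emb, img_emb, attn, temperature=0.07):
    # InfoNCE within one patient: align each lesion's attended image feature
    # with that lesion's own text embedding, a plausible way to resolve which
    # frame matches which lesion without frame-level labels.
    grounded = F.normalize(attn(text_emb, img_emb).squeeze(0), dim=-1)
    text = F.normalize(text_emb.squeeze(0), dim=-1)
    logits = grounded @ text.t() / temperature
    targets = torch.arange(logits.size(0))
    return F.cross_entropy(logits, targets)

if __name__ == "__main__":
    attn = PatientCrossAttention(dim=512)
    texts = torch.randn(1, 3, 512)   # three lesion descriptions from one report
    images = torch.randn(1, 5, 512)  # five polyp frames from the same patient
    print(patient_contrastive_loss(texts, images, attn).item())

In practice the attended features would feed CLIP-style contrastive training at the patient level rather than the frame level, which is one plausible way to handle reports that describe several polyps at once.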
@article{he2025_2505.09435,
  title   = {Endo-CLIP: Progressive Self-Supervised Pre-training on Raw Colonoscopy Records},
  author  = {Yili He and Yan Zhu and Peiyao Fu and Ruijie Yang and Tianyi Chen and Zhihua Wang and Quanlin Li and Pinghong Zhou and Xian Yang and Shuo Wang},
  journal = {arXiv preprint arXiv:2505.09435},
  year    = {2025}
}