
Unsupervised Foundation Model-Agnostic Slide-Level Representation Learning

Abstract

Representation learning of pathology whole-slide images (WSIs) has primarily relied on weak supervision with Multiple Instance Learning (MIL). This approach leads to slide representations highly tailored to a specific clinical task. Self-supervised learning (SSL) has been successfully applied to train histopathology foundation models (FMs) for patch embedding generation. However, generating patient- or slide-level embeddings remains challenging. Existing approaches for slide representation learning extend the principles of SSL from patch-level learning to entire slides by aligning different augmentations of the slide or by utilizing multimodal data. We propose a new single-modality SSL method in feature space that generates useful slide representations by integrating tile embeddings from multiple FMs. Our contrastive pretraining strategy, called COBRA, employs multiple FMs and an architecture based on Mamba-2. COBRA exceeds the performance of state-of-the-art slide encoders on four different public Clinical Proteomic Tumor Analysis Consortium (CPTAC) cohorts by at least +4.4% AUC on average, despite only being pretrained on 3048 WSIs from The Cancer Genome Atlas (TCGA). Additionally, COBRA is readily compatible at inference time with previously unseen feature extractors. Code is available at this https URL.
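
To make the core idea concrete, the sketch below illustrates one plausible reading of the abstract: tile embeddings from two different FMs are projected into a shared space, pooled into slide embeddings, and trained with a contrastive (InfoNCE-style) objective that treats the two FM views of the same slide as a positive pair. This is not the authors' implementation; in particular, mean pooling stands in for the paper's Mamba-2 aggregator, and all class names, dimensions, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlideAggregator(nn.Module):
    """Projects tile embeddings from one FM into a shared space and pools them
    into a single slide embedding. Mean pooling is a placeholder for the
    Mamba-2-based aggregator described in the paper."""
    def __init__(self, in_dim: int, embed_dim: int = 512):
        super().__init__()
        self.proj = nn.Linear(in_dim, embed_dim)

    def forward(self, tiles: torch.Tensor) -> torch.Tensor:
        # tiles: (num_tiles, in_dim) -> slide embedding: (embed_dim,)
        return self.proj(tiles).mean(dim=0)

def info_nce(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Symmetric InfoNCE loss: z_a[i] and z_b[i] are embeddings of the same
    slide obtained from two different foundation models."""
    z_a, z_b = F.normalize(z_a, dim=-1), F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature        # (batch, batch) similarity matrix
    targets = torch.arange(z_a.size(0))         # matching slides lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Toy usage: 4 slides with 128 tiles each, embedded by two FMs of different widths.
fm1_tiles = torch.randn(4, 128, 768)    # e.g. a ViT-base patch encoder
fm2_tiles = torch.randn(4, 128, 1024)   # e.g. a wider foundation model
agg1, agg2 = SlideAggregator(768), SlideAggregator(1024)
slides_a = torch.stack([agg1(t) for t in fm1_tiles])
slides_b = torch.stack([agg2(t) for t in fm2_tiles])
loss = info_nce(slides_a, slides_b)
loss.backward()
```

Because the aggregators only consume feature vectors, a slide encoder trained this way can, in principle, be paired at inference time with tile embeddings from feature extractors not seen during pretraining, which matches the compatibility claim in the abstract.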

@article{lenz2025_2411.13623,
  title={Unsupervised Foundation Model-Agnostic Slide-Level Representation Learning},
  author={Tim Lenz and Peter Neidlinger and Marta Ligero and Georg Wölflein and Marko van Treeck and Jakob Nikolas Kather},
  journal={arXiv preprint arXiv:2411.13623},
  year={2025}
}