Recent Advances in Out-of-Distribution Detection with CLIP-Like Models: A Survey

Abstract

Out-of-distribution (OOD) detection is a pivotal task for real-world applications, requiring models to identify test samples that are distributionally different from the in-distribution (ID) training data. Recent advances in AI, particularly Vision-Language Models (VLMs) such as CLIP, have revolutionized OOD detection by shifting from traditional unimodal image detectors to multimodal image-text detectors. This shift has inspired extensive research; however, existing categorization schemes (e.g., few-shot or zero-shot types) still rely solely on the availability of ID images, adhering to a unimodal paradigm. To better align with CLIP's cross-modal nature, we propose a new categorization framework rooted in both the image and text modalities. Specifically, we categorize existing methods by how the visual and textual information of OOD data is utilized, dividing them into four groups: OOD images (i.e., outliers) seen or unseen, and OOD texts (i.e., learnable vectors or class names) known or unknown, across two training strategies (i.e., training-free or training-required). More importantly, we discuss open problems in CLIP-like OOD detection and highlight promising directions for future research, including cross-domain integration, practical applications, and theoretical understanding.
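As a hedged illustration of the multimodal detectors the survey covers (not a method from the survey itself): a common zero-shot, training-free CLIP-based OOD score takes the maximum softmax over cosine similarities between an image embedding and the ID class-name text embeddings, in the spirit of MCM. A minimal numpy sketch with toy embeddings standing in for CLIP features:

```python
import numpy as np

def mcm_ood_score(image_emb, text_embs, temperature=1.0):
    """Maximum softmax over cosine similarities between one image
    embedding and the ID class-name text embeddings (MCM-style).
    A high score suggests the image matches some ID class; a low
    score flags a potential OOD sample."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = (txt @ img) / temperature      # one cosine logit per ID class
    probs = np.exp(logits - logits.max())   # numerically stable softmax
    probs /= probs.sum()
    return float(probs.max())

# Toy orthogonal "class name" embeddings (hypothetical stand-ins for
# CLIP text features of three ID class names).
text_embs = np.eye(3)
id_image = np.array([1.0, 0.0, 0.0])   # aligns with one ID class
ood_image = np.array([1.0, 1.0, 1.0])  # equally distant from every class

print(mcm_ood_score(id_image, text_embs) > mcm_ood_score(ood_image, text_embs))  # True
```

In practice the embeddings would come from CLIP's image and text encoders, and the OOD decision is made by thresholding the score; the sketch only shows the scoring rule, under the assumption of unit-normalized features.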

@article{li2025_2505.02448,
  title={Recent Advances in Out-of-Distribution Detection with CLIP-Like Models: A Survey},
  author={Chaohua Li and Enhao Zhang and Chuanxing Geng and Songcan Chen},
  journal={arXiv preprint arXiv:2505.02448},
  year={2025}
}