Towards Training-free Anomaly Detection with Vision and Language Foundation Models

Anomaly detection is valuable for real-world applications, such as industrial quality inspection. However, most approaches focus on detecting local structural anomalies while neglecting compositional anomalies that involve logical constraints. In this paper, we introduce LogSAD, a novel multi-modal framework that requires no training for both Logical and Structural Anomaly Detection. First, we propose a match-of-thought architecture that employs advanced large multi-modal models (i.e., GPT-4V) to generate matching proposals, formulating interests and compositional rules of thought for anomaly detection. Second, we elaborate on multi-granularity anomaly detection, consisting of patch tokens, sets of interests, and composition matching with vision and language foundation models. Subsequently, we present a calibration module to align anomaly scores from different detectors, followed by integration strategies for the final decision. Consequently, our approach addresses both logical and structural anomaly detection within a unified framework and achieves state-of-the-art results without the need for training, even when compared to supervised approaches, highlighting its robustness and effectiveness. Code is available at this https URL.
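The abstract describes calibrating anomaly scores from heterogeneous detectors before fusing them into a final decision. A minimal sketch of that idea, assuming a simple z-score calibration against reference (normal) statistics and element-wise maximum fusion (the paper's exact calibration and integration strategies are not specified here; all function names are illustrative):

```python
import numpy as np

def calibrate(scores, ref_scores):
    """Map raw detector scores onto a comparable scale using statistics
    from reference (normal) samples. Hypothetical z-score calibration;
    the paper's actual calibration module may differ."""
    mu = ref_scores.mean()
    sigma = ref_scores.std() + 1e-8  # avoid division by zero
    return (scores - mu) / sigma

def fuse(score_lists):
    """Integrate calibrated scores from multiple detectors via the
    element-wise maximum, a common late-fusion strategy."""
    return np.maximum.reduce(score_lists)

# Toy usage: two detectors operating at different granularities
# (e.g., patch-level structural scores vs. composition-level logical scores).
patch_raw = np.array([0.2, 0.9, 0.4])
logic_raw = np.array([10.0, 12.0, 30.0])
patch_ref = np.array([0.1, 0.2, 0.3])   # scores on normal samples
logic_ref = np.array([9.0, 10.0, 11.0])

final = fuse([calibrate(patch_raw, patch_ref),
              calibrate(logic_raw, logic_ref)])
```

Calibration matters because raw score ranges differ across detectors; without it, a detector with a larger numeric scale would dominate any fusion rule.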
@article{zhang2025_2503.18325,
  title={Towards Training-free Anomaly Detection with Vision and Language Foundation Models},
  author={Jinjin Zhang and Guodong Wang and Yizhou Jin and Di Huang},
  journal={arXiv preprint arXiv:2503.18325},
  year={2025}
}