From What to How: Attributing CLIP's Latent Components Reveals Unexpected Semantic Reliance

Transformer-based CLIP models are widely used for text-image probing and feature extraction, making it relevant to understand the internal mechanisms behind their predictions. While recent works show that Sparse Autoencoders (SAEs) yield interpretable latent components, they focus on what these encode and miss how they drive predictions. We introduce a scalable framework that reveals what latent components activate for, how they align with expected semantics, and how important they are to predictions. To achieve this, we adapt attribution patching for instance-wise component attributions in CLIP and highlight key faithfulness limitations of the widely used Logit Lens technique. By combining attributions with semantic alignment scores, we can automatically uncover reliance on components that encode semantically unexpected or spurious concepts. Applied across multiple CLIP variants, our method uncovers hundreds of surprising components linked to polysemous words, compound nouns, visual typography and dataset artifacts. While text embeddings remain prone to semantic ambiguity, they are more robust to spurious correlations compared to linear classifiers trained on image embeddings. A case study on skin lesion detection highlights how such classifiers can amplify hidden shortcuts, underscoring the need for holistic, mechanistic interpretability. We provide code at this https URL.
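To illustrate the instance-wise attribution step, the following is a minimal sketch of gradient-times-activation attribution patching applied to SAE latent components, assuming a PyTorch setup. The names `get_residual_acts`, `sae.encode`/`sae.decode`, and `class_direction` are illustrative placeholders rather than the paper's actual API, and the linear readout against a text-derived direction is a simplifying assumption.

```python
import torch

def attribute_sae_components(get_residual_acts, sae, images, class_direction):
    """First-order (attribution patching) estimate of each latent's effect.

    Approximates the change in the similarity score caused by ablating latent
    z_i to zero:  attr_i ≈ z_i * d(score)/d(z_i).
    """
    # Cache residual-stream activations at the SAE's hook point, then re-enable grad.
    acts = get_residual_acts(images).detach().requires_grad_(True)

    z = sae.encode(acts)          # sparse latent codes, shape [batch, n_components]
    z.retain_grad()               # keep gradients on this non-leaf tensor
    recon = sae.decode(z)         # reconstructed activations fed into the score

    # Simplified readout: similarity with a text/class embedding direction.
    score = (recon * class_direction).sum()
    score.backward()

    # Per-instance, per-component attributions; large values indicate strong reliance.
    return (z * z.grad).detach()
```

In the spirit of the framework described above, components with high attribution but low semantic alignment to the class prompt would then be flagged as candidates for unexpected or spurious reliance.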
@article{dreyer2025_2505.20229,
  title   = {From What to How: Attributing CLIP's Latent Components Reveals Unexpected Semantic Reliance},
  author  = {Maximilian Dreyer and Lorenz Hufe and Jim Berend and Thomas Wiegand and Sebastian Lapuschkin and Wojciech Samek},
  journal = {arXiv preprint arXiv:2505.20229},
  year    = {2025}
}