Multimodal models like CLIP have gained significant attention due to their remarkable zero-shot performance across various tasks. However, studies have revealed that CLIP can inadvertently learn spurious associations between target variables and confounding factors. To address this, we introduce \textsc{Locate-Then-Correct} (LTC), a contrastive framework that identifies spurious attention heads in Vision Transformers via mechanistic insights and mitigates them through targeted ablation. Furthermore, LTC identifies salient, task-relevant attention heads, enabling the integration of discriminative features through orthogonal projection to improve classification performance. We evaluate LTC on benchmarks with inherent background and gender biases, achieving substantial gains in worst-group accuracy compared to non-training post-hoc baselines. Additionally, we visualize the representations of selected heads and find that the resulting interpretations corroborate our contrastive mechanism for identifying both spurious and salient attention heads. Code is available at this https URL.
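To make the two correction steps concrete, below is a minimal sketch of what targeted head ablation and orthogonal projection could look like in practice. This is not the authors' implementation; the per-head contribution tensor, the flagged head indices, and the salient-head directions are all hypothetical placeholders, assuming only that ViT attention heads contribute additively to the image representation.

```python
import torch

def ablate_heads(head_contribs, spurious_heads):
    """Zero out the contributions of identified spurious attention heads.

    head_contribs: tensor [L, H, D] of per-head contributions to the image
    representation (assumed additive decomposition of the residual stream).
    spurious_heads: list of (layer, head) indices flagged as spurious.
    """
    out = head_contribs.clone()
    for layer, head in spurious_heads:
        out[layer, head] = 0.0
    return out

def project_onto(features, directions):
    """Orthogonally project features onto the subspace spanned by `directions`
    (e.g., discriminative directions taken from salient heads)."""
    q, _ = torch.linalg.qr(directions.T)   # orthonormal basis, shape [D, k]
    return features @ q @ q.T              # projection P = Q Q^T, shape [N, D]

# Hypothetical usage with random tensors standing in for real CLIP activations.
L, H, D, N = 12, 12, 768, 4
head_contribs = torch.randn(L, H, D)
features = torch.randn(N, D)
salient_dirs = torch.randn(3, D)           # assumed salient-head directions

edited = ablate_heads(head_contribs, [(10, 3), (11, 7)])
image_embed = edited.sum(dim=(0, 1))       # recompose the embedding additively
features_proj = project_onto(features, salient_dirs)
```

Ablation here simply removes the flagged heads' additive contributions, while the projection keeps only the component of the features lying in the span of the assumed discriminative directions.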
@article{yeo2025_2505.17425,
  title   = {Debiasing CLIP: Interpreting and Correcting Bias in Attention Heads},
  author  = {Wei Jie Yeo and Rui Mao and Moloud Abdar and Erik Cambria and Ranjan Satapathy},
  journal = {arXiv preprint arXiv:2505.17425},
  year    = {2025}
}