Debiasing CLIP: Interpreting and Correcting Bias in Attention Heads

23 May 2025
Wei Jie Yeo
Rui Mao
Moloud Abdar
Erik Cambria
Ranjan Satapathy
Abstract

Multimodal models like CLIP have gained significant attention due to their remarkable zero-shot performance across various tasks. However, studies have revealed that CLIP can inadvertently learn spurious associations between target variables and confounding factors. To address this, we introduce Locate-Then-Correct (LTC), a contrastive framework that identifies spurious attention heads in Vision Transformers via mechanistic insights and mitigates them through targeted ablation. Furthermore, LTC identifies salient, task-relevant attention heads, enabling the integration of discriminative features through orthogonal projection to improve classification performance. We evaluate LTC on benchmarks with inherent background and gender biases, achieving a gain of over 50% in worst-group accuracy compared to non-training post-hoc baselines. Additionally, we visualize the representation of selected heads and find that the presented interpretation corroborates our contrastive mechanism for identifying both spurious and salient attention heads. Code available at this https URL.
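The sketch below illustrates the two operations the abstract describes: ablating attention heads flagged as spurious and re-emphasizing salient, task-relevant directions via orthogonal projection. It is not the authors' implementation; the per-head decomposition of the CLIP image embedding, the head indices, and the specific ablation and projection choices are all assumptions for illustration.

```python
# Minimal sketch (not the paper's code) of head ablation plus orthogonal
# projection on a CLIP-style image embedding. Inputs are hypothetical:
# `head_contribs` is assumed to be a per-head decomposition whose sum equals
# the image embedding, and the head index lists stand in for heads that LTC
# would identify as spurious or salient.
import torch


def debias_image_embedding(
    head_contribs: torch.Tensor,   # [num_heads, dim] per-head contributions
    spurious_heads: list[int],     # heads assumed to encode the confound
    salient_heads: list[int],      # heads assumed to carry task-relevant features
) -> torch.Tensor:
    # (1) Targeted ablation: remove the spurious heads' contributions.
    # Mean-ablation over a batch is another option; here we simply zero them.
    kept = head_contribs.clone()
    kept[spurious_heads] = 0.0
    embedding = kept.sum(dim=0)

    # (2) Orthogonal projection: build an orthonormal basis for the subspace
    # spanned by the salient heads' contributions and add back the component
    # of the embedding that lies inside that subspace.
    salient = head_contribs[salient_heads]      # [k, dim]
    Q, _ = torch.linalg.qr(salient.T)           # [dim, k] orthonormal basis
    projected = Q @ (Q.T @ embedding)           # component in the salient subspace
    return embedding + projected


# Toy usage with random per-head contributions (12 heads, 512-dim CLIP space).
contribs = torch.randn(12, 512)
debiased = debias_image_embedding(contribs, spurious_heads=[3, 7], salient_heads=[1, 5])
print(debiased.shape)  # torch.Size([512])
```

The debiased embedding would then be compared against the text class embeddings as in standard zero-shot CLIP classification; the actual head-selection criterion and projection weighting used by LTC are described in the paper itself.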

@article{yeo2025_2505.17425,
  title={Debiasing CLIP: Interpreting and Correcting Bias in Attention Heads},
  author={Wei Jie Yeo and Rui Mao and Moloud Abdar and Erik Cambria and Ranjan Satapathy},
  journal={arXiv preprint arXiv:2505.17425},
  year={2025}
}