Learning Invariant Causal Mechanism from Vision-Language Models

Abstract

Contrastive Language-Image Pretraining (CLIP) has achieved remarkable success, but its performance can degrade when fine-tuned in out-of-distribution (OOD) scenarios. We model the prediction process using a Structural Causal Model (SCM) and show that the causal mechanism involving both invariant and variant factors in training environments differs from that in test environments. In contrast, the causal mechanism with solely invariant factors remains consistent across environments. We theoretically prove the existence of a linear mapping from CLIP embeddings to invariant factors, which can be estimated using interventional data. Additionally, we provide a condition to guarantee low OOD risk of the invariant predictor. Based on these insights, we propose the Invariant Causal Mechanism of CLIP (CLIP-ICM) framework. CLIP-ICM involves collecting interventional data, estimating a linear projection matrix, and making predictions within the invariant subspace. Experiments on several OOD datasets show that CLIP-ICM significantly improves the performance of CLIP. Our method offers a simple but powerful enhancement, boosting the reliability of CLIP in real-world applications.
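The three steps of the framework described above (collect interventional data, estimate a linear projection matrix, predict within the invariant subspace) can be sketched roughly as follows. This is an illustrative reading, not the paper's exact estimator: the function names are invented here, and the heuristic of keeping the directions that vary least across paired interventions is one plausible way to realize "estimating a linear projection matrix" from interventional data.

```python
import numpy as np

def estimate_invariant_projection(z_a, z_b, k):
    """Estimate a linear projection onto an invariant subspace (illustrative).

    z_a, z_b: (n, d) arrays of CLIP embeddings of the same n samples under
    two interventions (e.g., the same images rendered in two styles).
    Directions along which embeddings change across interventions are treated
    as variant; the k least-varying directions are kept as the invariant basis.
    """
    diff = z_a - z_b                        # variation induced by the intervention
    cov = diff.T @ diff / len(diff)         # covariance of the interventional change
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, :k]                   # (d, k) basis of least-variant directions

def project_to_invariant(z, basis):
    """Map embeddings into the invariant subspace before prediction."""
    return z @ basis
```

A downstream classifier (e.g., a zero-shot or linear probe) would then operate on `project_to_invariant(z, basis)` instead of the raw embeddings.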

@article{song2025_2405.15289,
  title={Learning Invariant Causal Mechanism from Vision-Language Models},
  author={Zeen Song and Siyu Zhao and Xingyu Zhang and Jiangmeng Li and Changwen Zheng and Wenwen Qiang},
  journal={arXiv preprint arXiv:2405.15289},
  year={2025}
}