MSCI: Addressing CLIP's Inherent Limitations for Compositional Zero-Shot Learning

Compositional Zero-Shot Learning (CZSL) aims to recognize unseen state-object combinations by leveraging known combinations. Existing studies typically rely on the cross-modal alignment capabilities of CLIP but tend to overlook its limitations in capturing fine-grained local features, which stem from its architectural design and training paradigm. To address this issue, we propose a Multi-Stage Cross-modal Interaction (MSCI) model that effectively explores and utilizes intermediate-layer information from CLIP's visual encoder. Specifically, we design two self-adaptive aggregators to extract local information from low-level visual features and to integrate global information from high-level visual features, respectively. This key information is progressively incorporated into textual representations through a stage-by-stage interaction mechanism, significantly enhancing the model's ability to perceive fine-grained local visual information. Additionally, MSCI dynamically adjusts the attention weights between global and local visual information based on different combinations, as well as on different elements within the same combination, allowing it to adapt flexibly to diverse scenarios. Experiments on three widely used datasets fully validate the effectiveness and superiority of the proposed model. Data and code are available at this https URL.
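To make the stage-by-stage interaction concrete, the following is a minimal PyTorch sketch of the pipeline the abstract describes: two aggregators pool intermediate CLIP visual-encoder layers into local and global token sets, text tokens cross-attend to each in turn, and a learned gate balances local against global visual evidence. All module names, dimensions, and the use of cross-attention here are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn


class SelfAdaptiveAggregator(nn.Module):
    # Pools a stack of intermediate CLIP visual-encoder layers into one
    # token sequence using learned, input-dependent layer weights.
    def __init__(self, dim: int):
        super().__init__()
        self.layer_gate = nn.Linear(dim, 1)  # scores each layer's token map
        self.proj = nn.Linear(dim, dim)

    def forward(self, layer_feats):
        # layer_feats: list of (B, N, D) patch-token maps from selected layers
        stacked = torch.stack(layer_feats, dim=1)         # (B, L, N, D)
        scores = self.layer_gate(stacked.mean(dim=2))     # (B, L, 1)
        weights = scores.softmax(dim=1).unsqueeze(-1)     # normalize over layers
        return self.proj((stacked * weights).sum(dim=1))  # (B, N, D)


class StageInteraction(nn.Module):
    # One interaction stage: text tokens cross-attend to visual tokens
    # and are updated residually.
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text, visual):
        attended, _ = self.attn(query=text, key=visual, value=visual)
        return self.norm(text + attended)


class MSCISketch(nn.Module):
    # Local aggregator over low-level layers, global aggregator over
    # high-level layers, two fusion stages, and a gate that balances
    # local vs. global visual evidence per sample.
    def __init__(self, dim: int = 512):
        super().__init__()
        self.local_agg = SelfAdaptiveAggregator(dim)
        self.global_agg = SelfAdaptiveAggregator(dim)
        self.stage1 = StageInteraction(dim)  # inject fine-grained local cues first
        self.stage2 = StageInteraction(dim)  # then holistic global context
        self.gate = nn.Linear(2 * dim, 1)

    def forward(self, text_tokens, low_feats, high_feats):
        local_vis = self.local_agg(low_feats)     # (B, N, D) fine-grained cues
        global_vis = self.global_agg(high_feats)  # (B, N, D) holistic cues
        t = self.stage1(text_tokens, local_vis)
        t = self.stage2(t, global_vis)
        # sample-dependent weighting between local and global visual summaries
        summary = torch.cat([local_vis.mean(1), global_vis.mean(1)], dim=-1)
        alpha = torch.sigmoid(self.gate(summary)).unsqueeze(1)        # (B, 1, 1)
        fused_vis = alpha * local_vis.mean(1, keepdim=True) \
            + (1.0 - alpha) * global_vis.mean(1, keepdim=True)        # (B, 1, D)
        return t, fused_vis


# Example: 6 low-level and 6 high-level layer outputs, 8 text tokens
model = MSCISketch(dim=512)
low = [torch.randn(2, 196, 512) for _ in range(6)]
high = [torch.randn(2, 196, 512) for _ in range(6)]
text, fused = model(torch.randn(2, 8, 512), low, high)

In this sketch the gate yields a single scalar per sample; the paper's dynamic weighting is described as operating per combination and per element, which would replace this scalar with finer-grained weights.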
@article{wang2025_2505.10289,
  title   = {MSCI: Addressing CLIP's Inherent Limitations for Compositional Zero-Shot Learning},
  author  = {Yue Wang and Shuai Xu and Xuelin Zhu and Yicong Li},
  journal = {arXiv preprint arXiv:2505.10289},
  year    = {2025}
}