X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP

Abstract

As Contrastive Language-Image Pre-training (CLIP) models are increasingly adopted for diverse downstream tasks and integrated into large vision-language models (VLMs), their susceptibility to adversarial perturbations has emerged as a critical concern. In this work, we introduce X-Transfer, a novel attack method that exposes a universal adversarial vulnerability in CLIP. X-Transfer generates a Universal Adversarial Perturbation (UAP) capable of deceiving various CLIP encoders and downstream VLMs across different samples, tasks, and domains. We refer to this property as super transferability: a single perturbation that simultaneously achieves cross-data, cross-domain, cross-model, and cross-task adversarial transferability. This is achieved through surrogate scaling, the key innovation of our approach. Unlike existing methods that rely on a fixed set of surrogate models and are therefore computationally expensive to scale, X-Transfer employs an efficient surrogate scaling strategy that dynamically selects a small subset of suitable surrogates from a large search space. Extensive evaluations demonstrate that X-Transfer significantly outperforms previous state-of-the-art UAP methods, establishing a new benchmark for adversarial transferability across CLIP models. The code is publicly available in our GitHub repository.
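
To make the core idea concrete, here is a minimal sketch of UAP optimization with dynamic surrogate selection in the spirit of surrogate scaling. This is an illustrative assumption of how such a loop could look in PyTorch, not the authors' implementation: the encoder pool, the scoring heuristic, the embedding-similarity loss, and all names (select_surrogates, uap_step, epsilon) are hypothetical.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch: optimize one universal perturbation (UAP) while
# dynamically picking a small subset of surrogate CLIP image encoders
# from a larger pool at each step. Details are assumptions, not the
# paper's code.

def select_surrogates(pool, scores, k=4):
    """Pick the k currently highest-scoring surrogate encoders from the pool."""
    idx = torch.topk(scores, k).indices
    return [pool[i] for i in idx], idx

def uap_step(uap, images, surrogates, epsilon=8 / 255, lr=1e-2):
    """One signed-gradient step of the UAP against the selected surrogates."""
    uap = uap.clone().detach().requires_grad_(True)
    loss = 0.0
    for encoder in surrogates:
        clean = encoder(images)                       # (B, D) embeddings
        adv = encoder((images + uap).clamp(0, 1))
        # Minimize similarity between clean and perturbed embeddings,
        # i.e., push adversarial features away from the clean ones.
        loss = loss + F.cosine_similarity(clean, adv).mean()
    loss.backward()
    with torch.no_grad():
        uap = uap - lr * uap.grad.sign()              # signed gradient descent
        uap = uap.clamp(-epsilon, epsilon)            # enforce L-inf budget
    return uap.detach(), loss.item()

# Assumed outer loop (pool, loader, and score update rule are placeholders):
#   uap = torch.zeros(1, 3, 224, 224)
#   for images in loader:
#       surrogates, idx = select_surrogates(pool, scores)
#       uap, step_loss = uap_step(uap, images, surrogates)
#       # re-score the chosen surrogates by how useful they were this step
```

The point of the sketch is the selection step: instead of backpropagating through every encoder in a large pool (expensive), only a small, adaptively chosen subset contributes gradients at each iteration, which is what makes scaling the surrogate pool tractable.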

@article{huang2025_2505.05528,
  title={X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP},
  author={Hanxun Huang and Sarah Erfani and Yige Li and Xingjun Ma and James Bailey},
  journal={arXiv preprint arXiv:2505.05528},
  year={2025}
}