
Transferable Adversarial Attacks on Black-Box Vision-Language Models

Abstract

Vision Large Language Models (VLLMs) are increasingly deployed to offer advanced capabilities on inputs comprising both text and images. While prior research has shown that adversarial attacks can transfer from open-source to proprietary black-box models in text-only and vision-only contexts, the extent and effectiveness of such vulnerabilities remain underexplored for VLLMs. We present a comprehensive analysis demonstrating that targeted adversarial examples are highly transferable to widely-used proprietary VLLMs such as GPT-4o, Claude, and Gemini. We show that attackers can craft perturbations to induce specific attacker-chosen interpretations of visual information, such as misinterpreting hazardous content as safe, overlooking sensitive or restricted material, or generating detailed incorrect responses aligned with the attacker's intent. Furthermore, we discover that universal perturbations -- modifications applicable to a wide set of images -- can consistently induce these misinterpretations across multiple proprietary VLLMs. Our experimental results on object recognition, visual question answering, and image captioning show that this vulnerability is common across current state-of-the-art models, and underscore an urgent need for robust mitigations to ensure the safe and secure deployment of VLLMs.
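The abstract does not spell out the optimization procedure, but a common way to craft such targeted perturbations is projected gradient descent (PGD) against one or more open-source surrogate VLLMs, then transferring the result to black-box models. The sketch below is a minimal illustration under that assumption; `surrogate_loss` is a hypothetical stand-in for a loss that is low when a surrogate VLLM produces the attacker-chosen text, and is not taken from the paper.

```python
import torch

def targeted_pgd(image, surrogate_loss, eps=8/255, alpha=1/255, steps=200):
    """L_inf-bounded targeted attack on a surrogate model (illustrative sketch).

    image:          (C, H, W) tensor with values in [0, 1]
    surrogate_loss: callable mapping a perturbed image to a scalar loss that is
                    low when the surrogate VLLM emits the attacker-chosen output
                    (hypothetical placeholder; not the paper's actual objective)
    """
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = surrogate_loss(torch.clamp(image + delta, 0, 1))
        loss.backward()
        with torch.no_grad():
            # Signed-gradient descent step, then project back into the eps-ball
            # and keep the perturbed image inside the valid pixel range.
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.clamp_(-image, 1 - image)
        delta.grad.zero_()
    return torch.clamp(image + delta, 0, 1).detach()
```

For transferability, the loss would typically be averaged over an ensemble of open-source surrogates; for a universal perturbation of the kind described above, a single `delta` would be shared and optimized over a batch of images rather than one image.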

@article{hu2025_2505.01050,
  title={Transferable Adversarial Attacks on Black-Box Vision-Language Models},
  author={Kai Hu and Weichen Yu and Li Zhang and Alexander Robey and Andy Zou and Chengming Xu and Haoqi Hu and Matt Fredrikson},
  journal={arXiv preprint arXiv:2505.01050},
  year={2025}
}