Making Every Step Effective: Jailbreaking Large Vision-Language Models Through Hierarchical KV Equalization

14 March 2025
Shuyang Hao
Yiwei Wang
Bryan Hooi
Jun Liu
Muhao Chen
Zi Huang
Yujun Cai
    AAML
    VLM
Abstract

In the realm of large vision-language models (LVLMs), adversarial jailbreak attacks serve as a red-teaming approach to identify safety vulnerabilities of these models and their associated defense mechanisms. However, we identify a critical limitation: not every adversarial optimization step leads to a positive outcome, and indiscriminately accepting optimization results at each step may reduce the overall attack success rate. To address this challenge, we introduce HKVE (Hierarchical Key-Value Equalization), an innovative jailbreaking framework that selectively accepts gradient optimization results based on the distribution of attention scores across different layers, ensuring that every optimization step positively contributes to the attack. Extensive experiments demonstrate HKVE's significant effectiveness, achieving attack success rates of 75.08% on MiniGPT4, 85.84% on LLaVA and 81.00% on Qwen-VL, substantially outperforming existing methods by margins of 20.43%, 21.01% and 26.43% respectively. Furthermore, making every step effective not only leads to an increase in attack success rate but also allows for a reduction in the number of iterations, thereby lowering computational costs. Warning: This paper contains potentially harmful example data.
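The abstract's central mechanism, accepting a gradient update only when a layer-wise attention criterion does not get worse, can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: score_fn is a hypothetical hook standing in for the paper's hierarchical key-value equalization measure, and loss_fn is whatever jailbreak objective is being minimized; step sizes and the epsilon budget are placeholder values.

import torch

def selective_step_attack(model, image, loss_fn, score_fn,
                          num_steps=100, step_size=1 / 255, eps=16 / 255):
    """Minimal sketch of selective-acceptance adversarial optimization.

    score_fn(model, image) -> float is a hypothetical criterion that rates
    how favourably the model's attention mass is distributed across layers;
    the paper's hierarchical KV equalization is NOT reproduced here.
    """
    adv = image.clone().detach()
    best_score = score_fn(model, adv)

    for _ in range(num_steps):
        adv.requires_grad_(True)
        loss = loss_fn(model, adv)
        (grad,) = torch.autograd.grad(loss, adv)

        # Propose a standard signed-gradient step, projected back into the
        # eps-ball around the clean image and the valid pixel range.
        candidate = adv.detach() - step_size * grad.sign()
        candidate = image + (candidate - image).clamp(-eps, eps)
        candidate = candidate.clamp(0.0, 1.0)

        # Core idea from the abstract: keep the update only if the
        # attention-based score improves (or at least does not degrade),
        # so every accepted step contributes positively to the attack.
        cand_score = score_fn(model, candidate)
        if cand_score >= best_score:
            adv, best_score = candidate.detach(), cand_score

    return adv

The generic score_fn hook is the main simplification here; the paper defines a specific hierarchical, layer-wise equalization criterion, which is what drives the reported gains in attack success rate and iteration count.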

@article{hao2025_2503.11750,
  title={Making Every Step Effective: Jailbreaking Large Vision-Language Models Through Hierarchical KV Equalization},
  author={Shuyang Hao and Yiwei Wang and Bryan Hooi and Jun Liu and Muhao Chen and Zi Huang and Yujun Cai},
  journal={arXiv preprint arXiv:2503.11750},
  year={2025}
}