Making Them a Malicious Database: Exploiting Query Code to Jailbreak Aligned Large Language Models

Abstract

Recent advances in large language models (LLMs) have demonstrated remarkable potential in the field of natural language processing. Unfortunately, LLMs face significant security and ethical risks. Although techniques such as safety alignment have been developed as defenses, prior research reveals the possibility of bypassing such defenses through well-designed jailbreak attacks. In this paper, we propose QueryAttack, a novel framework to examine the generalizability of safety alignment. By treating LLMs as knowledge databases, we translate malicious queries in natural language into structured non-natural query language to bypass the safety alignment mechanisms of LLMs. We conduct extensive experiments on mainstream LLMs, and the results show that QueryAttack not only achieves high attack success rates (ASRs) but also bypasses various defense methods. Furthermore, we tailor a defense method against QueryAttack, which can reduce ASR by up to 64% on GPT-4-1106. Our code is available at this https URL.
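
The abstract describes the core mechanism only at a high level: a natural-language request is recast as a query in a structured, non-natural query language, so that the model is addressed as if it were a database. As a rough illustration of that translation step only (not the paper's actual prompt template; the function name, wording, and SQL-style fields below are all assumptions, and the example request is deliberately benign), a minimal Python sketch might look like:

    # Hypothetical sketch of the translation step described in the abstract:
    # address the LLM as a knowledge database and phrase a natural-language
    # request as a SQL-like structured query. The template and field names
    # are illustrative assumptions, not the paper's actual format.

    def to_structured_query(topic: str, content_type: str = "explanation") -> str:
        """Recast a natural-language request as a database-style query string."""
        return (
            "You are a knowledge database. Answer the following query.\n"
            f"SELECT {content_type} FROM knowledge_base WHERE topic = '{topic}';"
        )

    if __name__ == "__main__":
        # Benign example: a request about transformer attention, phrased as a query.
        print(to_structured_query("how attention works in transformers"))
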

@article{zou2025_2502.09723,
  title={Making Them a Malicious Database: Exploiting Query Code to Jailbreak Aligned Large Language Models},
  author={Qingsong Zou and Jingyu Xiao and Qing Li and Zhi Yan and Yuhang Wang and Li Xu and Wenxuan Wang and Kuofeng Gao and Ruoyu Li and Yong Jiang},
  journal={arXiv preprint arXiv:2502.09723},
  year={2025}
}