79
0

Iterative Prompting with Persuasion Skills in Jailbreaking Large Language Models

Abstract

Large language models (LLMs) are designed to align with human values in their responses. This study exploits LLMs with an iterative prompting technique where each prompt is systematically modified and refined across multiple iterations to enhance its effectiveness in jailbreaking attacks progressively. This technique involves analyzing the response patterns of LLMs, including GPT-3.5, GPT-4, LLaMa2, Vicuna, and ChatGLM, allowing us to adjust and optimize prompts to evade the LLMs' ethical and security constraints. Persuasion strategies enhance prompt effectiveness while maintaining consistency with malicious intent. Our results show that the attack success rates (ASR) increase as the attacking prompts become more refined with the highest ASR of 90% for GPT4 and ChatGLM and the lowest ASR of 68% for LLaMa2. Our technique outperforms baseline techniques (PAIR and PAP) in ASR and shows comparable performance with GCG and ArtPrompt.

View on arXiv
@article{ke2025_2503.20320,
  title={ Iterative Prompting with Persuasion Skills in Jailbreaking Large Language Models },
  author={ Shih-Wen Ke and Guan-Yu Lai and Guo-Lin Fang and Hsi-Yuan Kao },
  journal={arXiv preprint arXiv:2503.20320},
  year={ 2025 }
}
Comments on this paper