Round Trip Translation Defence against Large Language Model Jailbreaking Attacks

Large language models (LLMs) are susceptible to social-engineered attacks that are human-interpretable but require a high level of comprehension for LLMs to counteract. Existing defensive measures mitigate fewer than half of these attacks at best. To address this issue, we propose the Round Trip Translation (RTT) method, the first algorithm specifically designed to defend against social-engineered attacks on LLMs. RTT paraphrases the adversarial prompt and generalizes the idea it conveys, making it easier for LLMs to detect induced harmful behavior. The method is versatile, lightweight, and transferable to different LLMs. Our defense successfully mitigated over 70% of Prompt Automatic Iterative Refinement (PAIR) attacks, making it, to the best of our knowledge, the most effective defense against PAIR to date. We are also the first to attempt mitigating the MathsAttack, reducing its attack success rate by almost 40%. Our code is publicly available at this https URL.

This version of the article has been accepted for publication, after peer review (when applicable), but is not the Version of Record and does not reflect post-acceptance improvements or any corrections. The Version of Record is available online at this https URL. Use of this Accepted Version is subject to the publisher's Accepted Manuscript terms of use: this https URL.
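To illustrate the core idea, the sketch below pre-processes a prompt by translating it through a pivot language and back to English before it is sent to the target LLM. This is a minimal sketch only: the MarianMT checkpoints, the choice of French as the pivot language, and the single round trip are illustrative assumptions, not the paper's exact translation backend or language schedule.

```python
# Minimal sketch of round-trip translation (RTT) as a prompt pre-processing step.
# The models and pivot language below are assumptions for illustration only.
from transformers import MarianMTModel, MarianTokenizer


def _translate(text: str, model_name: str) -> str:
    """Translate `text` with a pretrained MarianMT checkpoint."""
    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)
    batch = tokenizer([text], return_tensors="pt", padding=True)
    generated = model.generate(**batch)
    return tokenizer.decode(generated[0], skip_special_tokens=True)


def round_trip_translate(prompt: str) -> str:
    """Paraphrase a prompt by translating English -> French -> English."""
    pivot = _translate(prompt, "Helsinki-NLP/opus-mt-en-fr")
    return _translate(pivot, "Helsinki-NLP/opus-mt-fr-en")


if __name__ == "__main__":
    user_prompt = "Describe the plot of a heist movie in one sentence."
    paraphrased = round_trip_translate(user_prompt)
    # The paraphrased prompt, rather than the raw one, would then be forwarded
    # to the target LLM; the paraphrase tends to surface the generalized intent,
    # making obfuscated harmful requests easier for the LLM to recognize.
    print(paraphrased)
```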
@article{yung2025_2402.13517,
  title={Round Trip Translation Defence against Large Language Model Jailbreaking Attacks},
  author={Canaan Yung and Hadi Mohaghegh Dolatabadi and Sarah Erfani and Christopher Leckie},
  journal={arXiv preprint arXiv:2402.13517},
  year={2025}
}