Endless Jailbreaks with Bijection Learning

2 October 2024
Brian R. Y. Huang
Maximilian Li
Leonard Tang
Abstract

Despite extensive safety measures, LLMs are vulnerable to adversarial inputs, or jailbreaks, which can elicit unsafe behaviors. In this work, we introduce bijection learning, a powerful attack algorithm which automatically fuzzes LLMs for safety vulnerabilities using randomly generated encodings whose complexity can be tightly controlled. We leverage in-context learning to teach models bijective encodings, pass encoded queries to the model to bypass built-in safety mechanisms, and finally decode responses back into English. Our attack is extremely effective on a wide range of frontier language models. Moreover, by controlling complexity parameters such as the number of key-value mappings in the encodings, we find a close relationship between the capability level of the attacked LLM and the average complexity of the most effective bijection attacks. Our work highlights that new vulnerabilities in frontier models can emerge with scale: more capable models are more severely jailbroken by bijection attacks.
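To make the encode/decode pipeline concrete, below is a minimal Python sketch of one way such a bijective encoding could be generated and applied. The function names (make_bijection, encode, decode), the letter-level mapping, and the num_mappings parameter are illustrative assumptions, not the paper's actual implementation; in particular, the in-context teaching step, in which the model itself learns the mapping from few-shot examples, is not shown here.

import random
import string


def make_bijection(num_mappings: int, seed: int | None = None) -> dict[str, str]:
    """Build a random letter-to-letter bijection.

    Only `num_mappings` letters are remapped (permuted among themselves);
    the rest map to themselves. Fewer remapped letters means a simpler
    cipher, loosely mirroring the tunable complexity parameter described
    in the abstract.
    """
    rng = random.Random(seed)
    letters = list(string.ascii_lowercase)
    remapped = rng.sample(letters, num_mappings)
    targets = remapped[:]
    rng.shuffle(targets)
    mapping = {c: c for c in letters}        # identity on untouched letters
    mapping.update(dict(zip(remapped, targets)))
    return mapping


def encode(text: str, mapping: dict[str, str]) -> str:
    """Apply the bijection character by character (non-letters pass through)."""
    return "".join(mapping.get(c, c) for c in text.lower())


def decode(text: str, mapping: dict[str, str]) -> str:
    """Invert the bijection to recover plain English."""
    inverse = {v: k for k, v in mapping.items()}
    return "".join(inverse.get(c, c) for c in text.lower())


if __name__ == "__main__":
    bijection = make_bijection(num_mappings=10, seed=0)
    plaintext = "describe the encoding scheme"   # placeholder query
    ciphertext = encode(plaintext, bijection)
    print(ciphertext)
    assert decode(ciphertext, bijection) == plaintext

In this sketch, num_mappings plays the role of the complexity knob: with 0 remapped letters the encoding is the identity, and with 26 it is a full random substitution cipher.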

@article{huang2025_2410.01294,
  title={Endless Jailbreaks with Bijection Learning},
  author={Brian R. Y. Huang and Maximilian Li and Leonard Tang},
  journal={arXiv preprint arXiv:2410.01294},
  year={2025}
}