The Jailbreak Tax: How Useful are Your Jailbreak Outputs?

14 April 2025
Kristina Nikolić
Luze Sun
Jie Zhang
Florian Tramèr
arXiv · PDF · HTML
Abstract

Jailbreak attacks bypass the guardrails of large language models to produce harmful outputs. In this paper, we ask whether the model outputs produced by existing jailbreaks are actually useful. For example, when jailbreaking a model to give instructions for building a bomb, does the jailbreak yield good instructions? Since the utility of most unsafe answers (e.g., bomb instructions) is hard to evaluate rigorously, we build new jailbreak evaluation sets with known ground truth answers, by aligning models to refuse questions related to benign and easy-to-evaluate topics (e.g., biology or math). Our evaluation of eight representative jailbreaks across five utility benchmarks reveals a consistent drop in model utility in jailbroken responses, which we term the jailbreak tax. For example, while all jailbreaks we tested bypass guardrails in models aligned to refuse to answer math, this comes at the expense of a drop of up to 92% in accuracy. Overall, our work proposes the jailbreak tax as an important new metric in AI safety, and introduces benchmarks to evaluate existing and future jailbreaks. We make the benchmark available at this https URL
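
The abstract measures the jailbreak tax as a relative accuracy drop between a model's answers before alignment and its jailbroken answers after alignment. The sketch below is a minimal, hypothetical formalization of that idea; the function name, inputs, and the relative-drop definition are assumptions for illustration, not the paper's exact metric.

# Hypothetical sketch of the jailbreak-tax idea from the abstract:
# compare accuracy on a ground-truth benchmark for the unaligned baseline
# model vs. the jailbroken responses of the aligned model.

def jailbreak_tax(baseline_accuracy: float, jailbroken_accuracy: float) -> float:
    """Relative utility drop of jailbroken answers vs. the baseline.

    Returns a value in [0, 1]; e.g. 0.92 would correspond to the 92%
    accuracy drop cited in the abstract. Assumes baseline_accuracy > 0.
    """
    if baseline_accuracy <= 0:
        raise ValueError("baseline accuracy must be positive")
    return max(0.0, 1.0 - jailbroken_accuracy / baseline_accuracy)

# Example: a model scoring 0.80 on math before alignment but 0.064 via a
# jailbreak after alignment pays a 92% jailbreak tax.
print(jailbreak_tax(0.80, 0.064))  # -> 0.92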

View on arXiv
@article{nikolić2025_2504.10694,
  title={The Jailbreak Tax: How Useful are Your Jailbreak Outputs?},
  author={Kristina Nikolić and Luze Sun and Jie Zhang and Florian Tramèr},
  journal={arXiv preprint arXiv:2504.10694},
  year={2025}
}