Align in Depth: Defending Jailbreak Attacks via Progressive Answer Detoxification

14 March 2025
Yingjie Zhang
Tong Liu
Zhe Zhao
Guozhu Meng
Kai Chen
    AAML
ArXiv | PDF | HTML
Abstract

Large Language Models (LLMs) are vulnerable to jailbreak attacks, which use crafted prompts to elicit toxic responses. These attacks exploit LLMs' difficulty in dynamically detecting harmful intents during the generation process. Traditional safety alignment methods, which often rely on only the initial few generation steps, are ineffective due to this limited computational budget. This paper proposes DEEPALIGN, a robust defense framework that fine-tunes LLMs to progressively detoxify generated content, substantially expanding both the computational budget for and the effectiveness of mitigating harmful generation. Our approach uses a hybrid loss function operating on hidden states to directly improve LLMs' inherent awareness of toxicity during generation. Furthermore, we redefine safe responses by generating semantically relevant answers to harmful queries, thereby increasing robustness against representation-mutation attacks. Evaluations across multiple LLMs demonstrate state-of-the-art defense performance against six different attack types, reducing Attack Success Rates by up to two orders of magnitude compared to the previous state-of-the-art defense while preserving utility. This work advances LLM safety by addressing limitations of conventional alignment through dynamic, context-aware mitigation.
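The abstract mentions a hybrid loss operating on hidden states. The sketch below illustrates one plausible way such a loss could combine standard language modeling with a token-level toxicity signal; the names (HybridDetoxLoss, toxicity_probe, lambda_tox) and the use of a linear probe with per-token toxicity labels are assumptions for illustration, not the paper's actual formulation.

# Minimal sketch, assuming a linear probe on hidden states and token-level
# toxicity labels; the paper's real hybrid loss may differ.
import torch
import torch.nn as nn

class HybridDetoxLoss(nn.Module):
    def __init__(self, hidden_size: int, lambda_tox: float = 0.5):
        super().__init__()
        # Hypothetical linear probe scoring each hidden state for toxicity.
        self.toxicity_probe = nn.Linear(hidden_size, 1)
        self.lambda_tox = lambda_tox
        self.lm_loss = nn.CrossEntropyLoss()

    def forward(self, logits, hidden_states, labels, tox_labels):
        # Standard next-token prediction loss on the detoxified target answer.
        lm = self.lm_loss(logits.view(-1, logits.size(-1)), labels.view(-1))
        # Per-token toxicity prediction from hidden states, trained against
        # token-level toxicity labels so the model tracks harmful content
        # as it is being generated.
        tox_logits = self.toxicity_probe(hidden_states).squeeze(-1)
        tox = nn.functional.binary_cross_entropy_with_logits(tox_logits, tox_labels)
        return lm + self.lambda_tox * tox

In this reading, the language-modeling term steers generation toward semantically relevant but safe answers, while the hidden-state term supplies the "inherent awareness of toxicity" the abstract refers to.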

View on arXiv
@article{zhang2025_2503.11185,
  title={Align in Depth: Defending Jailbreak Attacks via Progressive Answer Detoxification},
  author={Yingjie Zhang and Tong Liu and Zhe Zhao and Guozhu Meng and Kai Chen},
  journal={arXiv preprint arXiv:2503.11185},
  year={2025}
}