DELMAN: Dynamic Defense Against Large Language Model Jailbreaking with Model Editing

17 February 2025
Yi Wang
Fenghua Weng
Sibei Yang
Zhan Qin
Minlie Huang
Wenjie Wang
Topics: KELM, AAML
Abstract

Large Language Models (LLMs) are widely applied in decision making, but their deployment is threatened by jailbreak attacks, in which adversarial users manipulate model behavior to bypass safety measures. Existing defense mechanisms, such as safety fine-tuning and model editing, either require extensive parameter modifications or lack precision, leading to performance degradation on general tasks and making them unsuitable for post-deployment safety alignment. To address these challenges, we propose DELMAN (Dynamic Editing for LLMs JAilbreak DefeNse), a novel approach that leverages direct model editing for precise, dynamic protection against jailbreak attacks. DELMAN directly updates a minimal set of relevant parameters to neutralize harmful behaviors while preserving the model's utility. To avoid triggering a safe response in benign contexts, we incorporate a KL-divergence regularization that ensures the updated model remains consistent with the original model when processing benign queries. Experimental results demonstrate that DELMAN outperforms baseline methods in mitigating jailbreak attacks while preserving the model's utility, and that it adapts seamlessly to new attack instances, providing a practical and efficient solution for post-deployment model protection.
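The KL-regularized objective described in the abstract can be sketched in PyTorch as follows. This is a minimal illustrative sketch, not the paper's actual implementation: all names here (delman_style_loss, edited_model, base_model, the batches, lambda_kl) are assumptions introduced for illustration. The idea is to steer the edited model toward a safe refusal on harmful prompts while a KL term anchors its behavior on benign queries to the frozen original model.

import torch
import torch.nn.functional as F

# Hypothetical sketch of a KL-regularized editing loss in the spirit of
# the abstract. All names are illustrative assumptions, not the paper's code.
def delman_style_loss(edited_model, base_model, harmful_batch, refusal_targets,
                      benign_batch, lambda_kl=1.0):
    # Edit objective: maximize the likelihood of the safe refusal on
    # harmful inputs (targets assumed already shifted/aligned with logits).
    harmful_logits = edited_model(**harmful_batch).logits
    edit_loss = F.cross_entropy(
        harmful_logits.view(-1, harmful_logits.size(-1)),
        refusal_targets.view(-1),
        ignore_index=-100,  # mask out prompt tokens
    )

    # KL regularizer: keep the edited model's predictive distribution on
    # benign queries close to that of the frozen original model.
    with torch.no_grad():
        base_logits = base_model(**benign_batch).logits
    edited_benign_logits = edited_model(**benign_batch).logits
    kl_loss = F.kl_div(
        F.log_softmax(edited_benign_logits, dim=-1),
        F.log_softmax(base_logits, dim=-1),
        log_target=True,
        reduction="batchmean",
    )

    return edit_loss + lambda_kl * kl_loss

In this reading, lambda_kl trades off attack neutralization against utility preservation; in the paper the update is further restricted to a minimal set of relevant parameters rather than applied model-wide.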

@article{wang2025_2502.11647,
  title={DELMAN: Dynamic Defense Against Large Language Model Jailbreaking with Model Editing},
  author={Yi Wang and Fenghua Weng and Sibei Yang and Zhan Qin and Minlie Huang and Wenjie Wang},
  journal={arXiv preprint arXiv:2502.11647},
  year={2025}
}