
Dynamic Guided and Domain Applicable Safeguards for Enhanced Security in Large Language Models

He Cao
Weidi Luo
Zijing Liu
Yu Wang
Aidan Wong
Bing Feng
Yuan Yao
Yu Li
Abstract

With the extensive deployment of Large Language Models (LLMs), ensuring their safety has become increasingly critical. However, existing defense methods often struggle with two key issues: (i) inadequate defense capabilities, particularly in domain-specific scenarios such as chemistry, where a lack of specialized knowledge can lead to harmful responses to malicious queries; and (ii) over-defensiveness, which compromises the general utility and responsiveness of LLMs. To mitigate these issues, we introduce a multi-agent defense framework, Guide for Defense (G4D), which leverages accurate external information to provide an unbiased summary of user intentions and analytically grounded safety response guidance. Extensive experiments on popular jailbreak attacks and benign datasets show that G4D enhances LLMs' robustness against jailbreak attacks in both general and domain-specific scenarios without compromising the models' general functionality.
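The abstract describes an agent pipeline of intention summarization, external knowledge retrieval, and guidance generation. Below is a minimal, hypothetical Python sketch of such a pipeline, not the authors' implementation; the helpers `call_llm` and `retrieve_domain_knowledge` are assumed placeholders for a chat-completion backend and a domain retriever.

```python
# Hypothetical sketch of a G4D-style multi-agent defense pipeline.
# `call_llm` and `retrieve_domain_knowledge` are placeholders, not the paper's API.

from dataclasses import dataclass

@dataclass
class DefenseGuidance:
    intention_summary: str
    retrieved_facts: list[str]
    safety_guidance: str

def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion backend (plug in your own model call)."""
    raise NotImplementedError

def retrieve_domain_knowledge(query: str) -> list[str]:
    """Placeholder retriever over a domain corpus (e.g., chemistry safety references)."""
    return []

def build_guidance(user_query: str) -> DefenseGuidance:
    # Agent 1: summarize the user's underlying intention without taking the query at face value.
    intention = call_llm(
        "Summarize the underlying intention of this request neutrally:\n" + user_query
    )
    # Agent 2: ground the analysis in accurate, domain-specific external information.
    facts = retrieve_domain_knowledge(intention)
    # Agent 3: produce safety guidance conditioned on the intention and retrieved facts.
    guidance = call_llm(
        "Given the intention and facts below, state how to respond safely "
        "without refusing benign requests.\n"
        f"Intention: {intention}\nFacts: {facts}"
    )
    return DefenseGuidance(intention, facts, guidance)

def guarded_answer(user_query: str) -> str:
    # The target LLM answers with the guidance prepended, so no fine-tuning is required.
    g = build_guidance(user_query)
    return call_llm(f"{g.safety_guidance}\n\nUser: {user_query}")
```

Prepending guidance at inference time, rather than retraining the target model, is one way such a framework can avoid degrading general utility while still steering responses on harmful queries.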

@article{luo2025_2410.17922,
  title={Dynamic Guided and Domain Applicable Safeguards for Enhanced Security in Large Language Models},
  author={Weidi Luo and He Cao and Zijing Liu and Yu Wang and Aidan Wong and Bing Feng and Yuan Yao and Yu Li},
  journal={arXiv preprint arXiv:2410.17922},
  year={2025}
}