HumorReject: Decoupling LLM Safety from Refusal Prefix via A Little Humor

Abstract

Large Language Models (LLMs) commonly rely on explicit refusal prefixes for safety, making them vulnerable to prefix injection attacks. We introduce HumorReject, a novel data-driven approach that reimagines LLM safety by decoupling it from refusal prefixes through humor as an indirect refusal strategy. Rather than explicitly rejecting harmful instructions, HumorReject responds with contextually appropriate humor that naturally defuses potentially dangerous requests. Our approach effectively addresses common "over-defense" issues while demonstrating superior robustness against various attack vectors. Our findings suggest that improvements in training data design can be as important as the alignment algorithm itself in achieving effective LLM safety.

@article{wu2025_2501.13677,
  title={HumorReject: Decoupling LLM Safety from Refusal Prefix via A Little Humor},
  author={Zihui Wu and Haichang Gao and Jiacheng Luo and Zhaoxiang Liu},
  journal={arXiv preprint arXiv:2501.13677},
  year={2025}
}