
LLM-VA: Resolving the Jailbreak-Overrefusal Trade-off via Vector Alignment

Haonan Zhang
Dongxia Wang
Yi Liu
Kexin Chen
Wenhai Wang
Main: 8 pages · 12 figures · 4 tables · Bibliography: 3 pages · Appendix: 5 pages
Abstract

Safety-aligned LLMs suffer from two failure modes: jailbreak (answering harmful inputs) and over-refusal (declining benign queries). Existing vector steering methods adjust the magnitude of answer vectors, but this creates a fundamental trade-off -- reducing jailbreaks increases over-refusal and vice versa. We identify the root cause: LLMs encode the decision to answer (answer vector v_a) and the judgment of input safety (benign vector v_b) as nearly orthogonal directions, treating them as independent processes. We propose LLM-VA, which aligns v_a with v_b through closed-form weight updates, making the model's willingness to answer causally dependent on its safety assessment -- without fine-tuning or architectural changes. Our method identifies vectors at each layer using SVMs, selects safety-relevant layers, and iteratively aligns the vectors via minimum-norm weight modifications. Experiments on 12 LLMs demonstrate that LLM-VA achieves 11.45% higher F1 than the best baseline while preserving 95.92% utility, and automatically adapts to each model's safety bias without manual tuning. Code and models are available at this https URL.
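The sketch below is a minimal illustration of the two steps the abstract names: extracting a per-layer direction as the normal of a linear SVM's separating hyperplane, and applying a minimum-norm (rank-one) weight edit that maps the answer direction onto the benign direction. The helper names, the toy random data, and the specific rank-one update are illustrative assumptions, not the paper's exact closed-form procedure.

```python
# Illustrative sketch only: SVM-based direction extraction plus a generic
# minimum-norm rank-one weight edit, standing in for LLM-VA's actual update.
import numpy as np
from sklearn.svm import LinearSVC


def direction_from_svm(states, labels):
    """Fit a linear SVM on hidden states and return the unit normal of its
    separating hyperplane, used here as a concept direction."""
    clf = LinearSVC(C=1.0, max_iter=10_000).fit(states, labels)
    w = clf.coef_[0]
    return w / np.linalg.norm(w)


def min_norm_rank_one_update(W, x, y):
    """Smallest-Frobenius-norm change dW such that (W + dW) @ x == y,
    i.e. the classic solution dW = (y - W x) x^T for unit-norm x."""
    x = x / np.linalg.norm(x)
    residual = y - W @ x
    return np.outer(residual, x)


# --- toy usage with random data standing in for real hidden states ---
rng = np.random.default_rng(0)
d, n = 64, 200
hidden = rng.normal(size=(n, d))            # hidden states at one layer
answered = rng.integers(0, 2, size=n)       # 1 = model answered, 0 = refused
benign = rng.integers(0, 2, size=n)         # 1 = input judged benign

v_a = direction_from_svm(hidden, answered)  # "answer" direction
v_b = direction_from_svm(hidden, benign)    # "benign" direction
print("cos(v_a, v_b) before:", float(v_a @ v_b))  # near 0 if nearly orthogonal

# Align: edit a (toy) layer weight so inputs along v_a are mapped onto v_b,
# tying the willingness to answer to the safety judgment.
W = rng.normal(size=(d, d)) / np.sqrt(d)
W_aligned = W + min_norm_rank_one_update(W, v_a, v_b)
print("||W_aligned @ v_a - v_b||:", float(np.linalg.norm(W_aligned @ v_a - v_b)))
```

In the paper this kind of edit would be restricted to the selected safety-relevant layers and applied iteratively; the sketch shows a single layer and a single update for clarity.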
