Controllable Value Alignment in Large Language Models through Neuron-Level Editing

Yonghui Yang
Junwei Li
Jilong Liu
Yicheng He
Fengbin Zhu
Weibiao Huang
Le Wu
Richang Hong
Tat-Seng Chua
Main: 7 pages · Appendix: 10 pages · Bibliography: 3 pages · 12 figures · 7 tables
Abstract

Aligning large language models (LLMs) with human values has become increasingly important as their influence on human behavior and decision-making expands. However, existing steering-based alignment methods suffer from limited controllability: steering a target value often unintentionally activates other, non-target values. To characterize this limitation, we introduce value leakage, a diagnostic notion that captures the unintended activation of non-target values during value steering, along with a normalized leakage metric grounded in Schwartz's value theory. In light of this analysis, we propose NeVA, a neuron-level editing framework for controllable value alignment in LLMs. NeVA identifies sparse, value-relevant neurons and performs inference-time activation editing, enabling fine-grained control without parameter updates or retraining. Experiments show that NeVA achieves stronger target-value alignment while incurring less degradation of general capabilities. Moreover, NeVA significantly reduces average leakage, with residual effects largely confined to semantically related value classes. Overall, NeVA offers a more controllable and interpretable mechanism for value alignment.
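The abstract only sketches the mechanism (select a sparse set of value-relevant neurons, then edit their activations at inference time, with no parameter updates). The general pattern can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the selection heuristic (mean-activation difference between value-laden and neutral inputs), the top-`k` sparsity, and the `alpha` scaling are all assumptions made for illustration.

```python
import numpy as np

def select_value_neurons(acts_value, acts_neutral, k):
    """Pick the k neurons whose mean activation differs most between
    value-laden and neutral inputs (an assumed sparse-selection
    heuristic, not taken from the paper). Returns the selected
    indices and the per-neuron difference vector."""
    diff = acts_value.mean(axis=0) - acts_neutral.mean(axis=0)
    idx = np.argsort(-np.abs(diff))[:k]
    return idx, diff

def edit_activations(h, idx, diff, alpha=1.0):
    """Inference-time activation edit: shift only the selected neurons
    along the target-value direction, scaled by alpha, leaving all
    other neurons untouched (no weight updates)."""
    h = h.copy()
    h[..., idx] += alpha * diff[idx]
    return h

# Toy data: 32 samples, 16 neurons; neuron 3 carries the "value" signal.
rng = np.random.default_rng(0)
acts_value = rng.normal(0.0, 1.0, (32, 16)) + np.eye(16)[3] * 2.0
acts_neutral = rng.normal(0.0, 1.0, (32, 16))

idx, diff = select_value_neurons(acts_value, acts_neutral, k=2)

# Edit a fresh hidden-state vector: only the 2 selected neurons change.
h = np.zeros(16)
h_edited = edit_activations(h, idx, diff, alpha=0.5)
```

In a real LLM the same edit would be applied to hidden states inside the forward pass (e.g. via a forward hook on the chosen layer), which is what makes the intervention training-free; the sparsity of `idx` is what limits leakage into neurons tied to non-target values.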
