
Why Steering Works: Toward a Unified View of Language Model Parameter Dynamics

Ziwen Xu
Chenyan Wu
Hengyu Sun
Haiwen Hong
Mengru Wang
Yunzhi Yao
Longtao Huang
Hui Xue
Shumin Deng
Zhixuan Chu
Huajun Chen
Ningyu Zhang
Main: 8 pages · Appendix: 6 pages · Bibliography: 5 pages · 5 figures · 6 tables
Abstract

Methods for controlling large language models (LLMs), including local weight fine-tuning, LoRA-based adaptation, and activation-based interventions, are often studied in isolation, obscuring their connections and making comparison difficult. In this work, we present a unified view that frames these interventions as dynamic weight updates induced by a control signal, placing them within a single conceptual framework. Building on this view, we propose a unified preference-utility analysis that separates control effects into preference, defined as the tendency toward a target concept, and utility, defined as coherent and task-valid generation, and measures both on a shared log-odds scale using polarity-paired contrastive examples. Across methods, we observe a consistent trade-off: stronger control increases preference while predictably reducing utility. We further explain this behavior through an activation-manifold perspective, in which control shifts representations along target-concept directions to enhance preference, while utility declines primarily when interventions push representations off the model's valid-generation manifold. Finally, guided by this analysis, we introduce SPLIT, a new steering approach that improves preference while better preserving utility. Code is available at this https URL.
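The paper's exact estimators are not reproduced here; as a rough sketch of the ideas the abstract describes, the PyTorch snippet below builds a concept direction from polarity-paired contrastive activations, applies it as an additive steering shift of adjustable strength, and scores a target token on a log-odds scale. Every identifier (`steering_vector`, `apply_steering`, `alpha`, `W_out`, the mean-difference construction itself) is an illustrative assumption, not the SPLIT method.

```python
# Hypothetical illustration (not the paper's code): activation steering
# plus a log-odds preference measure from polarity-paired examples.
import torch


def steering_vector(pos_acts: torch.Tensor, neg_acts: torch.Tensor) -> torch.Tensor:
    """Concept direction: mean difference of activations collected on
    positive- vs. negative-polarity contrastive examples."""
    v = pos_acts.mean(dim=0) - neg_acts.mean(dim=0)
    return v / v.norm()  # unit vector along the target concept


def apply_steering(hidden: torch.Tensor, v: torch.Tensor, alpha: float) -> torch.Tensor:
    """Shift hidden states along the concept direction; larger alpha means
    stronger control (higher preference, typically at a utility cost)."""
    return hidden + alpha * v


def log_odds(logits: torch.Tensor, target_id: int) -> torch.Tensor:
    """Shared log-odds scale: log p(target) - log(1 - p(target))."""
    logp = torch.log_softmax(logits, dim=-1)[..., target_id]
    return logp - torch.log1p(-logp.exp())


if __name__ == "__main__":
    torch.manual_seed(0)
    d, vocab = 64, 100
    W_out = torch.randn(vocab, d) / d ** 0.5   # toy unembedding matrix
    pos = torch.randn(32, d) + 1.0             # activations on "+" polarity pairs
    neg = torch.randn(32, d) - 1.0             # activations on "-" polarity pairs
    v = steering_vector(pos, neg)
    h = torch.randn(d)                         # one hidden state to steer
    for alpha in (0.0, 2.0, 8.0):
        steered = apply_steering(h, v, alpha)
        lo = log_odds(steered @ W_out.T, target_id=3)
        print(f"alpha={alpha:.0f}  log-odds(target)={lo.item():+.3f}")
```

Note that adding `alpha * v` to a layer's output is algebraically identical to adding a constant bias to that layer's weights, which is one concrete way to read the abstract's framing of interventions as dynamic weight updates induced by a control signal.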
