
SAFER: Probing Safety in Reward Models with Sparse Autoencoder

Sihang Li
Wei Shi
Ziyuan Xie
Tao Liang
Guojun Ma
Xiang Wang
Main: 9 pages · 8 figures · 4 tables · Bibliography: 4 pages · Appendix: 5 pages
Abstract

Reinforcement learning from human feedback (RLHF) is a key paradigm for aligning large language models (LLMs) with human values, yet the reward models at its core remain largely opaque. In this work, we present Sparse Autoencoder For Enhanced Reward model (SAFER), a novel framework for interpreting and improving reward models through mechanistic analysis. Leveraging sparse autoencoders (SAEs), we uncover human-interpretable features in reward model activations, enabling insight into safety-relevant decision-making. We apply SAFER to safety-oriented preference datasets and quantify the salience of individual features by the activation differences between chosen and rejected responses. Using these feature-level signals, we design targeted data poisoning and denoising strategies. Experiments show that SAFER can precisely degrade or enhance safety alignment with minimal data modification, without sacrificing general chat performance. Our approach contributes to interpreting, auditing, and refining reward models in high-stakes LLM alignment tasks. Our code is available at this https URL. This paper discusses topics related to large language model safety and may include discussions or examples that highlight potential risks or unsafe outcomes.
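The central quantity described in the abstract, per-feature salience measured as the difference in SAE feature activations between chosen and rejected responses, can be sketched as follows. This is an illustrative PyTorch outline under our own assumptions, not the authors' released code: the names sae_encode, chosen_acts, and rejected_acts are hypothetical, and we assume reward-model activations have already been pooled to one vector per response.

# Minimal sketch (not the authors' implementation) of feature salience:
# score each SAE feature by how differently it activates on chosen vs.
# rejected responses in a safety preference dataset.
import torch

def feature_salience(sae_encode, chosen_acts, rejected_acts):
    """
    sae_encode: callable mapping reward-model activations [N, d_model]
                to sparse SAE feature activations [N, d_sae] (assumed).
    chosen_acts / rejected_acts: per-response pooled activations [N, d_model].
    Returns a signed salience score per SAE feature [d_sae].
    """
    f_chosen = sae_encode(chosen_acts).mean(dim=0)      # mean feature activation on chosen responses
    f_rejected = sae_encode(rejected_acts).mean(dim=0)  # mean feature activation on rejected responses
    return f_chosen - f_rejected                        # large |value| => safety-salient candidate feature

Features with large absolute salience would be candidates for the safety-relevant features the paper refers to; selecting or flipping the preference pairs that strongly activate them is one plausible way to realize the targeted poisoning and denoising experiments the abstract describes.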

@article{li2025_2507.00665,
  title={SAFER: Probing Safety in Reward Models with Sparse Autoencoder},
  author={Sihang Li and Wei Shi and Ziyuan Xie and Tao Liang and Guojun Ma and Xiang Wang},
  journal={arXiv preprint arXiv:2507.00665},
  year={2025}
}