SAIF: A Sparse Autoencoder Framework for Interpreting and Steering Instruction Following of Language Models

The ability of large language models (LLMs) to follow instructions is crucial for their practical applications, yet the underlying mechanisms remain poorly understood. This paper presents a novel framework that leverages sparse autoencoders (SAEs) to interpret how instruction following works in these models. We demonstrate how the features we identify can effectively steer model outputs to align with given instructions. Through analysis of SAE latent activations, we identify specific latents responsible for instruction-following behavior. Our findings reveal that instruction-following capabilities are encoded by a distinct set of instruction-relevant SAE latents. These latents both show semantic proximity to the relevant instructions and exert causal effects on model behavior. Our research highlights several factors crucial for achieving effective steering performance: precise feature identification, the role of the final layer, and optimal instruction positioning. Additionally, we demonstrate that our methodology scales effectively across SAEs and LLMs of varying sizes.
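To make the steering idea concrete, the sketch below shows one common way an SAE latent can be used to steer a model: add a scaled copy of that latent's decoder direction to a layer's hidden states via a forward hook. This is a generic, hedged illustration under assumed names (the paper's released code is not shown here, and the SAE object, layer index, latent index, and strength are all hypothetical).

import torch

def make_steering_hook(sae_decoder_weight: torch.Tensor,
                       latent_idx: int,
                       strength: float):
    """Return a forward hook that adds a scaled SAE decoder direction
    (the row of W_dec for one latent) to a layer's output hidden states.
    Illustrative sketch only; names and shapes are assumptions."""
    direction = sae_decoder_weight[latent_idx]        # assumed shape: (d_model,)
    direction = direction / direction.norm()          # unit-normalize the direction

    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        steered = hidden + strength * direction.to(hidden.dtype).to(hidden.device)
        return (steered,) + output[1:] if isinstance(output, tuple) else steered

    return hook

# Hypothetical usage with a HuggingFace-style decoder-only model and an SAE
# whose decoder matrix is `sae.W_dec` (both names assumed):
# handle = model.model.layers[-1].register_forward_hook(
#     make_steering_hook(sae.W_dec, latent_idx=12345, strength=8.0))
# output_ids = model.generate(**inputs)
# handle.remove()

The hook-based approach leaves the model weights untouched, so the same identified latent can be switched on or off per generation simply by registering or removing the hook.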
@article{he2025_2502.11356,
  title   = {SAIF: A Sparse Autoencoder Framework for Interpreting and Steering Instruction Following of Language Models},
  author  = {Zirui He and Haiyan Zhao and Yiran Qiao and Fan Yang and Ali Payani and Jing Ma and Mengnan Du},
  journal = {arXiv preprint arXiv:2502.11356},
  year    = {2025}
}