Identifying Intervenable and Interpretable Features via Orthogonality Regularization

Moritz Miller
Florent Draye
Bernhard Schölkopf
Main: 7 pages · 9 figures · Appendix: 10 pages
Abstract

Building on recent progress in fine-tuning language models around a fixed sparse autoencoder, we disentangle the decoder matrix into almost orthogonal features. This reduces interference and superposition between the features while keeping performance on the target dataset essentially unchanged. Our orthogonality penalty leads to identifiable features, ensuring the uniqueness of the decomposition. Further, we find that the distance between embedded feature explanations increases with a stricter orthogonality penalty, a desirable property for interpretability. Invoking the Independent Causal Mechanisms principle, we argue that orthogonality promotes modular representations amenable to causal intervention. We empirically show that these increasingly orthogonalized features allow for isolated interventions. Our code is available at this https URL.
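The abstract does not spell out the exact form of the orthogonality penalty, but a common choice for such a regularizer is the squared off-diagonal mass of the Gram matrix of the (row-normalized) decoder. The sketch below is a minimal NumPy illustration under that assumption; the function name and the row-wise feature layout are hypothetical, not taken from the paper's code.

```python
import numpy as np

def orthogonality_penalty(decoder: np.ndarray) -> float:
    """Squared off-diagonal Gram penalty: zero iff feature directions
    are pairwise orthogonal.

    decoder: (n_features, d_model) matrix; each row is one feature
    direction. Rows are normalized first so the penalty measures
    angles between features, not their magnitudes.
    """
    W = decoder / np.linalg.norm(decoder, axis=1, keepdims=True)
    gram = W @ W.T                         # pairwise cosine similarities
    off_diag = gram - np.eye(W.shape[0])   # zero out the diagonal (self-similarity)
    return float((off_diag ** 2).sum())
```

In a training loop, a term like `lam * orthogonality_penalty(decoder)` would be added to the reconstruction loss, with the strength `lam` controlling how strictly orthogonality is enforced.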
