Train One Sparse Autoencoder Across Multiple Sparsity Budgets to Preserve Interpretability and Accuracy

30 May 2025
Nikita Balagansky, Yaroslav Aksenov, Daniil Laptev, Vadim Kurochkin, Gleb Gerasimov, Nikita Koryagin, Daniil Gavrilov
Main: 3 pages · Appendix: 3 pages · Bibliography: 2 pages · 9 figures · 2 tables
Abstract

Sparse Autoencoders (SAEs) have proven to be powerful tools for interpreting neural networks by decomposing hidden representations into disentangled, interpretable features via sparsity constraints. However, conventional SAEs are constrained by the fixed sparsity level chosen during training; meeting different sparsity requirements therefore demands separate models and increases the computational footprint during both training and evaluation. We introduce a novel training objective, HierarchicalTopK, which trains a single SAE to optimise reconstructions across multiple sparsity levels simultaneously. Experiments with Gemma-2 2B demonstrate that our approach achieves Pareto-optimal trade-offs between sparsity and explained variance, outperforming traditional SAEs trained at individual sparsity levels. Further analysis shows that HierarchicalTopK preserves high interpretability scores even at higher sparsity. The proposed objective thus closes an important gap between flexibility and interpretability in SAE design.
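This page does not reproduce the training objective itself, so the PyTorch sketch below is only one plausible reading of the idea described in the abstract: encode once, then average reconstruction losses over several TopK sparsity budgets so that a single SAE serves every budget. The class and function names (TopKSAE, decode_topk, hierarchical_topk_loss) and the budget values are illustrative assumptions, not the authors' code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKSAE(nn.Module):
    # Minimal TopK sparse autoencoder: keep the k largest latent
    # activations, zero the rest, then decode.
    def __init__(self, d_model: int, d_latent: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_latent)
        self.decoder = nn.Linear(d_latent, d_model)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        return F.relu(self.encoder(x))

    def decode_topk(self, z: torch.Tensor, k: int) -> torch.Tensor:
        vals, idx = torch.topk(z, k, dim=-1)
        z_sparse = torch.zeros_like(z).scatter_(-1, idx, vals)
        return self.decoder(z_sparse)

def hierarchical_topk_loss(model, x, budgets=(16, 32, 64, 128)):
    # Hypothetical reading of the objective: one shared encoding,
    # reconstruction losses averaged over several sparsity budgets.
    z = model.encode(x)
    loss = x.new_zeros(())
    for k in budgets:
        loss = loss + F.mse_loss(model.decode_topk(z, k), x)
    return loss / len(budgets)

Because the top-16 latents are by construction a subset of the top-32, the budgets in such a scheme are nested, which is plausibly what allows a single trained SAE to be evaluated at any of the sparsity levels without retraining.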

View on arXiv: https://arxiv.org/abs/2505.24473
@article{balagansky2025_2505.24473,
  title={Train One Sparse Autoencoder Across Multiple Sparsity Budgets to Preserve Interpretability and Accuracy},
  author={Nikita Balagansky and Yaroslav Aksenov and Daniil Laptev and Vadim Kurochkin and Gleb Gerasimov and Nikita Koryagin and Daniil Gavrilov},
  journal={arXiv preprint arXiv:2505.24473},
  year={2025}
}