Evaluating Sparse Autoencoders: From Shallow Design to Matching Pursuit

5 June 2025
Valérie Costa
Thomas Fel
Ekdeep Singh Lubana
Bahareh Tolooshams
Demba Ba
Abstract

Sparse autoencoders (SAEs) have recently become central tools for interpretability, leveraging dictionary learning principles to extract sparse, interpretable features from neural representations whose underlying structure is typically unknown. This paper evaluates SAEs in a controlled setting using MNIST, which reveals that current shallow architectures implicitly rely on a quasi-orthogonality assumption that limits their ability to extract correlated features. To move beyond this, we introduce a multi-iteration SAE obtained by unrolling Matching Pursuit (MP-SAE). It enables residual-guided extraction of the correlated features that arise in hierarchical settings such as handwritten digit generation, while guaranteeing monotonic improvement of the reconstruction as more atoms are selected.
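The residual-guided encoding the abstract describes follows classical Matching Pursuit: at each iteration, select the dictionary atom most correlated with the current residual and subtract its projection. A minimal NumPy sketch of that loop is below; the function and parameter names are illustrative, not the paper's actual architecture, and a learned MP-SAE would unroll these iterations with trained dictionary weights.

```python
import numpy as np

def matching_pursuit_encode(x, D, n_iters=8):
    """Greedy matching-pursuit encoding (illustrative sketch).

    x : (d,) input vector.
    D : (d, k) dictionary with unit-norm columns.
    Returns the sparse code z and the residual norm after each iteration.
    """
    residual = x.astype(float).copy()
    z = np.zeros(D.shape[1])
    norms = []
    for _ in range(n_iters):
        # Select the atom most correlated with the current residual.
        corr = D.T @ residual
        j = np.argmax(np.abs(corr))
        # Subtract its projection; with unit-norm atoms the residual
        # norm cannot increase, so reconstruction improves monotonically.
        z[j] += corr[j]
        residual -= corr[j] * D[:, j]
        norms.append(np.linalg.norm(residual))
    return z, norms
```

Because each step removes an orthogonal projection of the residual onto the chosen atom, the sequence of residual norms is non-increasing, which is the monotonic-improvement guarantee the abstract refers to.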

BibTeX
@article{costa2025_2506.05239,
  title={Evaluating Sparse Autoencoders: From Shallow Design to Matching Pursuit},
  author={Valérie Costa and Thomas Fel and Ekdeep Singh Lubana and Bahareh Tolooshams and Demba Ba},
  journal={arXiv preprint arXiv:2506.05239},
  year={2025}
}