Incorporating Hierarchical Semantics in Sparse Autoencoder Architectures

1 June 2025
Mark Muchane
Sean Richardson
Kiho Park
Victor Veitch
Main: 10 pages · 24 figures · Bibliography: 1 page · Appendix: 15 pages
Abstract

Sparse dictionary learning (and, in particular, sparse autoencoders) attempts to learn a set of human-understandable concepts that can explain variation on an abstract space. A basic limitation of this approach is that it neither exploits nor represents the semantic relationships between the learned concepts. In this paper, we introduce a modified SAE architecture that explicitly models a semantic hierarchy of concepts. Application of this architecture to the internal representations of large language models shows both that semantic hierarchy can be learned, and that doing so improves both reconstruction and interpretability. Additionally, the architecture leads to significant improvements in computational efficiency.
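The page gives only the abstract, so the paper's actual architecture is not reproduced here. As a rough illustration of the general idea, below is a minimal PyTorch sketch: a vanilla sparse autoencoder, plus a hypothetical two-level variant in which child latents are gated on their parent latent, encoding a parent-to-child semantic hierarchy. The class names, the gating scheme, and the loss are assumptions for illustration, not the authors' method.

# Hedged sketch: a vanilla sparse autoencoder (SAE) plus a HYPOTHETICAL
# hierarchical variant. The parent-gating scheme below illustrates the
# general idea only; it is not the architecture from the paper.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Standard SAE: x -> ReLU(W_enc x + b) -> W_dec z, trained with an
    L1 sparsity penalty on the latent codes z."""
    def __init__(self, d_model: int, d_dict: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)
        self.decoder = nn.Linear(d_dict, d_model, bias=False)

    def forward(self, x):
        z = torch.relu(self.encoder(x))   # sparse latent codes
        x_hat = self.decoder(z)           # reconstruction
        return x_hat, z

class HierarchicalSAE(nn.Module):
    """Hypothetical two-level variant: each parent latent owns a fixed block
    of child latents, and a child can fire only when its parent is active.
    This gating mechanism is an assumption for illustration."""
    def __init__(self, d_model: int, n_parents: int, children_per_parent: int):
        super().__init__()
        self.parent_enc = nn.Linear(d_model, n_parents)
        self.child_enc = nn.Linear(d_model, n_parents * children_per_parent)
        d_dict = n_parents * (1 + children_per_parent)
        self.decoder = nn.Linear(d_dict, d_model, bias=False)
        self.children_per_parent = children_per_parent

    def forward(self, x):
        parents = torch.relu(self.parent_enc(x))    # (B, P)
        children = torch.relu(self.child_enc(x))    # (B, P*C), grouped by parent
        # Gate each child block on its parent: inactive parent => inactive children.
        gate = (parents > 0).float().repeat_interleave(
            self.children_per_parent, dim=-1)
        children = children * gate
        z = torch.cat([parents, children], dim=-1)  # (B, P + P*C)
        return self.decoder(z), z

def sae_loss(x, x_hat, z, l1_coeff=1e-3):
    """Reconstruction error plus L1 sparsity penalty on the latents."""
    return ((x - x_hat) ** 2).mean() + l1_coeff * z.abs().mean()

A gating scheme like this also hints at where the abstract's efficiency claim could come from: when a parent latent is inactive, its entire block of child latents can be skipped at decode time. Whether the paper's architecture works this way is not established by the abstract alone.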

@article{muchane2025_2506.01197,
  title={Incorporating Hierarchical Semantics in Sparse Autoencoder Architectures},
  author={Mark Muchane and Sean Richardson and Kiho Park and Victor Veitch},
  journal={arXiv preprint arXiv:2506.01197},
  year={2025}
}