Sparse dictionary learning (and, in particular, sparse autoencoders) attempts to learn a set of human-understandable concepts that can explain variation in an abstract space. A basic limitation of this approach is that it neither exploits nor represents the semantic relationships between the learned concepts. In this paper, we introduce a modified SAE architecture that explicitly models a semantic hierarchy of concepts. Applying this architecture to the internal representations of large language models shows that semantic hierarchy can be learned and that doing so improves both reconstruction and interpretability. Additionally, the architecture leads to significant improvements in computational efficiency.
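The abstract does not spell out the architecture, so the sketch below is only one plausible way to "explicitly model a semantic hierarchy" in an SAE, not the paper's actual method: child latents are grouped under parent latents and are gated on whether their parent fires. The class and parameter names (HierarchicalSAE, n_parents, children_per_parent, the sparsity coefficient) are hypothetical illustrations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalSAE(nn.Module):
    """Illustrative sparse autoencoder with a two-level latent hierarchy:
    each parent latent owns a contiguous block of child latents, and the
    children only contribute to the reconstruction when their parent is active."""

    def __init__(self, d_model: int, n_parents: int, children_per_parent: int):
        super().__init__()
        n_children = n_parents * children_per_parent
        self.children_per_parent = children_per_parent
        # Separate encoders and decoders for the parent and child levels.
        self.enc_parent = nn.Linear(d_model, n_parents)
        self.enc_child = nn.Linear(d_model, n_children)
        self.dec_parent = nn.Linear(n_parents, d_model, bias=False)
        self.dec_child = nn.Linear(n_children, d_model, bias=False)
        self.bias = nn.Parameter(torch.zeros(d_model))

    def forward(self, x):
        x_c = x - self.bias
        parent = F.relu(self.enc_parent(x_c))   # (batch, n_parents)
        child = F.relu(self.enc_child(x_c))     # (batch, n_parents * children_per_parent)
        # Gate each block of children by whether its parent latent fired.
        gate = (parent > 0).float().repeat_interleave(self.children_per_parent, dim=-1)
        child = child * gate
        recon = self.dec_parent(parent) + self.dec_child(child) + self.bias
        # L1 sparsity penalty applied at both levels of the hierarchy.
        sparsity = parent.abs().sum(-1).mean() + child.abs().sum(-1).mean()
        return recon, sparsity

# Minimal usage example on random activations (dimensions are placeholders).
sae = HierarchicalSAE(d_model=768, n_parents=512, children_per_parent=8)
x = torch.randn(4, 768)
recon, sparsity = sae(x)
loss = F.mse_loss(recon, x) + 1e-3 * sparsity
```

A gating scheme of this kind also hints at one possible source of the efficiency gains the abstract mentions: child features only need to be computed for the (typically few) active parents, though whether the paper exploits this specific form of conditional computation is an assumption here.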
@article{muchane2025_2506.01197,
  title={Incorporating Hierarchical Semantics in Sparse Autoencoder Architectures},
  author={Mark Muchane and Sean Richardson and Kiho Park and Victor Veitch},
  journal={arXiv preprint arXiv:2506.01197},
  year={2025}
}