Universal Sparse Autoencoders: Interpretable Cross-Model Concept Alignment

We present Universal Sparse Autoencoders (USAEs), a framework for uncovering and aligning interpretable concepts spanning multiple pretrained deep neural networks. Unlike existing concept-based interpretability methods, which focus on a single model, USAEs jointly learn a universal concept space that can reconstruct and interpret the internal activations of multiple models at once. Our core insight is to train a single, overcomplete sparse autoencoder (SAE) that ingests activations from any model and decodes them to approximate the activations of any other model under consideration. By optimizing a shared objective, the learned dictionary captures common factors of variation (concepts) across different tasks, architectures, and datasets. We show that USAEs discover semantically coherent and important universal concepts across vision models, ranging from low-level features (e.g., colors and textures) to higher-level structures (e.g., parts and objects). Overall, USAEs provide a powerful new method for interpretable cross-model analysis and offer novel applications, such as coordinated activation maximization, that open avenues for deeper insights into multi-model AI systems.
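The cross-model training idea can be illustrated with a short PyTorch sketch. The per-model linear encoder/decoder layout, the TopK sparsity rule, and all names (UniversalSAE, model_dims, n_concepts, k) are illustrative assumptions rather than the paper's exact architecture; the essential point is the shared, overcomplete concept space and the sum over all (source, target) reconstruction pairs.

```python
# Minimal sketch of cross-model concept learning (assumed layout, not the authors' code).
import torch
import torch.nn as nn


class UniversalSAE(nn.Module):
    """Shared, overcomplete concept space with per-model encoders and decoders."""

    def __init__(self, model_dims, n_concepts=4096, k=32):
        super().__init__()
        self.k = k  # number of active concepts per sample (assumed TopK sparsity)
        # One linear encoder and decoder per model, all routed through the
        # same n_concepts-dimensional universal concept space.
        self.encoders = nn.ModuleDict(
            {name: nn.Linear(d, n_concepts) for name, d in model_dims.items()}
        )
        self.decoders = nn.ModuleDict(
            {name: nn.Linear(n_concepts, d) for name, d in model_dims.items()}
        )

    def encode(self, name, acts):
        z = torch.relu(self.encoders[name](acts))
        # Keep only the k largest concept activations per sample.
        topk = torch.topk(z, self.k, dim=-1)
        return torch.zeros_like(z).scatter_(-1, topk.indices, topk.values)

    def forward(self, src, acts, tgt):
        # Encode source-model activations into the shared concept space,
        # then decode toward the target model's activation space.
        return self.decoders[tgt](self.encode(src, acts))


# Toy usage: two "models" with different feature widths, paired on the same inputs.
model_dims = {"vit": 768, "resnet": 2048}
usae = UniversalSAE(model_dims, n_concepts=4096, k=32)

acts = {"vit": torch.randn(8, 768), "resnet": torch.randn(8, 2048)}

# Shared objective: every (source, target) pair contributes a reconstruction term.
loss = torch.tensor(0.0)
for src in acts:
    for tgt in acts:
        loss = loss + nn.functional.mse_loss(usae(src, acts[src], tgt), acts[tgt])
loss.backward()  # all encoders and decoders are optimized jointly
```

Under these assumptions, universality arises because the same concept dictionary must explain every model's activations, so only factors of variation shared across models survive the sparsity constraint.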
@article{thasarathan2025_2502.03714,
  title   = {Universal Sparse Autoencoders: Interpretable Cross-Model Concept Alignment},
  author  = {Harrish Thasarathan and Julian Forsyth and Thomas Fel and Matthew Kowal and Konstantinos Derpanis},
  journal = {arXiv preprint arXiv:2502.03714},
  year    = {2025}
}