
Decoding Dense Embeddings: Sparse Autoencoders for Interpreting and Discretizing Dense Retrieval

Main: 7 pages
Appendix: 6 pages
Bibliography: 3 pages
Figures: 9
Tables: 11
Abstract

Despite their strong performance, Dense Passage Retrieval (DPR) models suffer from a lack of interpretability. In this work, we propose a novel interpretability framework that leverages Sparse Autoencoders (SAEs) to decompose previously uninterpretable dense embeddings from DPR models into distinct, interpretable latent concepts. We generate natural language descriptions for each latent concept, enabling human interpretation of both the dense embeddings and the query-document similarity scores of DPR models. We further introduce Concept-Level Sparse Retrieval (CL-SR), a retrieval framework that directly uses the extracted latent concepts as indexing units. CL-SR combines the semantic expressiveness of dense embeddings with the transparency and efficiency of sparse representations. We show that CL-SR achieves high index-space and computational efficiency while maintaining robust performance under vocabulary and semantic mismatch.
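As a rough illustration of the idea, the sketch below shows the two components the abstract describes: an overcomplete sparse autoencoder that reconstructs a DPR embedding through a ReLU bottleneck, and a concept-level similarity that scores a query-document pair only over concepts active in both. This is a minimal sketch under stated assumptions: the class and function names, the dimensions, the L1 sparsity penalty, and the dot-product scoring are all illustrative, and the paper's exact architecture, sparsity mechanism, and CL-SR scoring may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseAutoencoder(nn.Module):
    """Hypothetical SAE: maps a dense DPR embedding to a wide, sparse code."""

    def __init__(self, d_model: int = 768, n_concepts: int = 16384):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_concepts)
        self.decoder = nn.Linear(n_concepts, d_model)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        # ReLU leaves a small set of non-negative activations;
        # each active dimension is read as one latent "concept".
        return F.relu(self.encoder(x))

    def forward(self, x: torch.Tensor):
        z = self.encode(x)
        return self.decoder(z), z


def sae_loss(x, x_hat, z, l1_coeff: float = 1e-3):
    # Reconstruction fidelity plus an L1 penalty that encourages sparsity
    # (an assumed objective; the paper may use a different sparsity scheme).
    return F.mse_loss(x_hat, x) + l1_coeff * z.abs().mean()


def cl_sr_score(sae, q_emb, d_emb):
    # Concept-level sparse retrieval: only concepts active in BOTH the
    # query and the document contribute, so every score term is attributable
    # to a nameable concept (illustrative scoring, not the paper's exact one).
    zq, zd = sae.encode(q_emb), sae.encode(d_emb)
    return (zq * zd).sum(dim=-1)


# Toy usage with random stand-ins for DPR embeddings.
sae = SparseAutoencoder()
q, d = torch.randn(1, 768), torch.randn(1, 768)
x_hat, z = sae(d)
loss = sae_loss(d, x_hat, z)
score = cl_sr_score(sae, q, d)
```

Because the sparse codes are non-negative and mostly zero, they can serve as indexing units in an inverted index, which is how a CL-SR-style framework trades the opacity of dense dot products for per-concept attributable scores.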

@article{park2025_2506.00041,
  title={Decoding Dense Embeddings: Sparse Autoencoders for Interpreting and Discretizing Dense Retrieval},
  author={Seongwan Park and Taeklim Kim and Youngjoong Ko},
  journal={arXiv preprint arXiv:2506.00041},
  year={2025}
}