Word Equations: Inherently Interpretable Sparse Word Embeddings through Sparse Coding

Abstract

Word embeddings are a powerful natural language processing technique, but they are extremely difficult to interpret. To enable interpretable NLP models, we create vectors where each dimension is inherently interpretable. By inherently interpretable, we mean a system where each dimension is associated with some human-understandable hint that can describe the meaning of that dimension. In order to create more interpretable word embeddings, we transform pretrained dense word embeddings into sparse embeddings. These new embeddings are inherently interpretable: each of their dimensions is created from and represents a natural language word or specific grammatical concept. We construct these embeddings through sparse coding, where each vector in the basis set is itself a word embedding. Therefore, each dimension of our sparse vectors corresponds to a natural language word. We also show that models trained using these sparse embeddings can achieve good performance and are more interpretable in practice, including through human evaluations.
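The core operation the abstract describes, expressing a dense embedding as a sparse combination of basis vectors, can be sketched with a generic sparse-coding solver. The following is a minimal illustration using ISTA (iterative soft-thresholding) for the lasso objective, not the authors' actual method or data; the dictionary here is a small random stand-in for a matrix whose columns would be word embeddings.

```python
import numpy as np

def sparse_code(x, D, lam=0.05, n_iter=500):
    """Sparse-code dense vector x over dictionary D (columns = basis
    "word" embeddings) by solving the lasso with ISTA:
        min_a 0.5 * ||x - D a||^2 + lam * ||a||_1
    Returns a sparse coefficient vector a; each nonzero entry would
    correspond to one basis word."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the squared-error term
        z = a - grad / L                   # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return a

# Toy example: 5 hypothetical basis vectors in R^4 (random stand-ins
# for real word embeddings), with x built from basis words 1 and 3.
rng = np.random.default_rng(0)
D = rng.standard_normal((4, 5))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary columns
x = 0.8 * D[:, 1] + 0.5 * D[:, 3]
a = sparse_code(x, D)
print(np.round(a, 2))                      # sparse; mass concentrated on a few dims
```

Because the L1 penalty zeroes out small coefficients, the recovered code is sparse, and each surviving dimension can be read off as the basis word it was built from, which is what makes this style of embedding inherently interpretable.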

View on arXiv