
All or None: Identifiable Linear Properties of Next-token Predictors in Language Modeling

International Conference on Artificial Intelligence and Statistics (AISTATS), 2024
Main: 8 pages · 4 figures · Bibliography: 4 pages · Appendix: 25 pages
Abstract

We analyze identifiability as a possible explanation for the ubiquity of linear properties across language models, such as the vector difference between the representations of "easy" and "easiest" being parallel to that between "lucky" and "luckiest". To this end, we ask whether finding a linear property in one model implies that any model inducing the same distribution has that property, too. To answer this, we first prove an identifiability result characterizing distribution-equivalent next-token predictors, lifting a diversity requirement of previous results. Second, based on a refinement of relational linearity [Paccanaro and Hinton, 2001; Hernandez et al., 2024], we show how many notions of linearity are amenable to our analysis. Finally, we show that under suitable conditions, these linear properties hold in either all or none of the distribution-equivalent next-token predictors.
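The linear property mentioned in the abstract can be made concrete with a small sketch. This is a toy illustration, not the paper's method: the 3-d "embeddings" below are made-up vectors constructed so that the superlative direction is exactly shared, and we check parallelism of the two difference vectors via cosine similarity.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors given as lists of floats."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical toy embeddings (not from any real model), chosen so that
# emb("easiest") - emb("easy") equals emb("luckiest") - emb("lucky").
emb = {
    "easy":     [1.0, 0.0, 0.5],
    "easiest":  [1.0, 2.0, 0.5],   # = easy + superlative direction (0, 2, 0)
    "lucky":    [0.2, 1.0, -0.3],
    "luckiest": [0.2, 3.0, -0.3],  # = lucky + the same direction
}

d1 = [a - b for a, b in zip(emb["easiest"], emb["easy"])]
d2 = [a - b for a, b in zip(emb["luckiest"], emb["lucky"])]
print(cosine(d1, d2))  # 1.0: the difference vectors are exactly parallel
```

In real language models the cosine similarity would be close to, but not exactly, 1; the paper's question is whether such a property, once found in one model, must hold in every model inducing the same next-token distribution.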
