By virtue of linguistic compositionality, a few syntactic rules and a finite lexicon can generate an unbounded number of sentences. That is, language, though seemingly high-dimensional, can be explained using relatively few degrees of freedom. An open question is whether contemporary language models (LMs) reflect the intrinsic simplicity of language that is enabled by compositionality. We take a geometric view of this problem by relating the degree of compositionality in a dataset to the intrinsic dimension (ID) of its representations under an LM, a measure of feature complexity. We find not only that the degree of dataset compositionality is reflected in representations' ID, but also that the relationship between compositionality and geometric complexity arises from linguistic features learned over the course of training. Finally, our analyses reveal a striking contrast between nonlinear and linear dimensionality, showing that they respectively encode semantic and superficial aspects of linguistic composition.
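The abstract contrasts two geometric quantities computed over LM representations: a nonlinear intrinsic dimension and a linear dimensionality. Below is a minimal, illustrative sketch of how such quantities are commonly estimated; it is not the authors' released code. Assumptions: hidden states are given as an (n_points, n_features) array, the nonlinear ID estimator is TwoNN (Facco et al., 2017), and linear dimension is taken as the number of PCA components needed to explain 99% of the variance.

import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.decomposition import PCA


def twonn_id(states: np.ndarray) -> float:
    """Nonlinear intrinsic dimension via the TwoNN maximum-likelihood estimator."""
    # Distances to the two nearest neighbors (index 0 is the point itself).
    nn = NearestNeighbors(n_neighbors=3).fit(states)
    dists, _ = nn.kneighbors(states)
    r1, r2 = dists[:, 1], dists[:, 2]
    mu = r2 / np.clip(r1, 1e-12, None)   # ratio of 2nd to 1st neighbor distance
    mu = mu[mu > 1.0]                    # drop degenerate (duplicate) points
    return len(mu) / np.sum(np.log(mu))  # MLE for the Pareto exponent = ID


def linear_dim(states: np.ndarray, var_threshold: float = 0.99) -> int:
    """Linear dimensionality: PCA components needed to reach var_threshold variance."""
    pca = PCA().fit(states - states.mean(axis=0))
    cumvar = np.cumsum(pca.explained_variance_ratio_)
    return int(np.searchsorted(cumvar, var_threshold) + 1)


if __name__ == "__main__":
    # Toy stand-in for LM representations: points on a curved 2D manifold
    # embedded in 64 dimensions, so the nonlinear ID is near 2 while the
    # linear dimension is noticeably larger.
    rng = np.random.default_rng(0)
    t = rng.uniform(0, 4 * np.pi, size=(2000, 2))
    manifold = np.hstack([np.sin(t), np.cos(t), t])
    states = manifold @ rng.normal(size=(manifold.shape[1], 64))
    print(f"nonlinear ID (TwoNN): {twonn_id(states):.2f}")
    print(f"linear dim (99% var): {linear_dim(states)}")

The gap between the two estimates on curved data is the kind of signal the paper exploits: nonlinear ID tracks the underlying degrees of freedom, whereas linear dimension also counts how the manifold is embedded.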
@article{lee2025_2410.01444,
  title   = {Geometric Signatures of Compositionality Across a Language Model's Lifetime},
  author  = {Jin Hwa Lee and Thomas Jiralerspong and Lei Yu and Yoshua Bengio and Emily Cheng},
  journal = {arXiv preprint arXiv:2410.01444},
  year    = {2025}
}