
Differential syntactic and semantic encoding in LLMs

Santiago Acevedo
Alessandro Laio
Marco Baroni
Main: 7 pages · 19 figures · 5 tables · Bibliography: 5 pages · Appendix: 11 pages
Abstract

We study how syntactic and semantic information is encoded in the inner-layer representations of Large Language Models (LLMs), focusing on the very large DeepSeek-V3. We find that, by averaging hidden-representation vectors of sentences sharing syntactic structure or meaning, we obtain vectors that capture a significant proportion of the syntactic and semantic information contained in the representations. In particular, subtracting these syntactic and semantic "centroids" from sentence vectors strongly affects their similarity with syntactically and semantically matched sentences, respectively, suggesting that syntax and semantics are, at least partially, linearly encoded. We also find that the cross-layer encoding profiles of syntax and semantics differ, and that the two signals can to some extent be decoupled, suggesting differential encoding of these two types of linguistic information in LLM representations.
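The centroid-subtraction probe described in the abstract can be sketched in a few lines of NumPy. This is an illustrative toy version under stated assumptions, not the paper's actual pipeline: the vectors here are synthetic stand-ins for hidden-layer sentence representations, with a shared component playing the role of a common syntactic structure.

```python
import numpy as np

def centroid(vectors):
    """Average of representation vectors for sentences sharing a property
    (syntactic structure or meaning) -- the 'centroid' of that group."""
    return np.mean(vectors, axis=0)

def cosine(u, v):
    """Cosine similarity between two representation vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy data: each "sentence vector" = a strong shared component (standing in
# for a common syntactic structure) + idiosyncratic per-sentence noise.
rng = np.random.default_rng(0)
shared = rng.normal(size=64) * 5.0              # shared structural component
group = shared + rng.normal(size=(10, 64))      # 10 matched sentence vectors

c = centroid(group)
a, b = group[0], group[1]

sim_before = cosine(a, b)          # dominated by the shared component: high
sim_after = cosine(a - c, b - c)   # shared part removed: similarity collapses
print(f"before: {sim_before:.2f}, after: {sim_after:.2f}")
```

In this toy setting, subtracting the group centroid removes the linearly encoded shared signal, so the similarity between matched vectors drops sharply, mirroring the effect the abstract reports for syntactic and semantic centroids.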
