
Do language models plan ahead for future tokens?

10 pages (main), 12 figures, 5 tables, 3 pages of bibliography, 11 pages of appendix
Abstract

Do transformers "think ahead" during inference at a given position? It is known that transformers prepare information in the hidden states of the forward pass at time step $t$ that is then used in future forward passes at $t+\tau$. We posit two explanations for this phenomenon: pre-caching, in which off-diagonal gradient terms present in training result in the model computing features at $t$ that are irrelevant to the present inference task but useful for the future, and breadcrumbs, in which the features most relevant to time step $t$ are already the same as those that would most benefit inference at time $t+\tau$. We test these hypotheses by training language models without propagating gradients to past timesteps, a scheme we formalize as myopic training. In a synthetic data setting, we find clear evidence for pre-caching. In the autoregressive language modeling setting, our experiments are more suggestive of the breadcrumbs hypothesis.
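To make the idea of myopic training concrete, the sketch below shows one plausible way to block gradients from the loss at position $t$ from reaching computation done at earlier positions: in a transformer, past positions influence position $t$ only through attention keys and values, so detaching their off-diagonal contributions (while keeping the diagonal live) removes the off-diagonal gradient terms. This is an illustrative assumption on our part; the module name, structure, and this particular detaching scheme are not taken from the paper's implementation.

```python
# Illustrative sketch of "myopic" causal self-attention: gradients from the
# loss at position t do not flow back into computation at earlier positions.
# Not the authors' implementation; names and structure are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MyopicCausalSelfAttention(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        self.scale = d_model ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        batch, seq_len, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)

        # Scores against detached keys (no gradient to past positions) and
        # against live keys; the diagonal uses the live version so each
        # position still receives gradient for its own computation.
        scores_detached = (q @ k.detach().transpose(-2, -1)) * self.scale
        scores_live = (q @ k.transpose(-2, -1)) * self.scale
        eye = torch.eye(seq_len, dtype=torch.bool, device=x.device)
        scores = torch.where(eye, scores_live, scores_detached)

        # Standard causal mask: position t attends only to positions <= t.
        causal = torch.triu(
            torch.ones(seq_len, seq_len, dtype=torch.bool, device=x.device), 1
        )
        scores = scores.masked_fill(causal, float("-inf"))
        attn = F.softmax(scores, dim=-1)

        # Values: off-diagonal (past) contributions are detached, while the
        # diagonal contribution keeps its gradient.
        attn_diag = attn * eye
        attn_past = attn * ~eye
        mixed = attn_diag @ v + attn_past @ v.detach()
        return self.out(mixed)
```

Under this scheme, any feature a position computes can only be rewarded by its own next-token loss, so features that nonetheless help future positions would be "breadcrumbs" rather than deliberately pre-cached.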
