
Token Sample Complexity of Attention

Léa Bohbot
Cyril Letrouit
Gabriel Peyré
François-Xavier Vialard
Main: 17 pages · 7 figures · Bibliography: 4 pages · Appendix: 23 pages
Abstract

As context windows in large language models continue to expand, it is essential to characterize how attention behaves at extreme sequence lengths. We introduce token-sample complexity: the rate at which attention computed on $n$ tokens converges to its infinite-token limit. We estimate finite-$n$ convergence bounds at two levels: pointwise uniform convergence of the attention map, and convergence of moments for the transformed token distribution. For compactly supported (and more generally sub-Gaussian) distributions, our first result shows that the attention map converges uniformly on a ball of radius $R$ at rate $C(R)/\sqrt{n}$, where $C(R)$ grows exponentially with $R$. For large $R$, this estimate loses practical value, and our second result addresses this issue by establishing convergence rates for the moments of the transformed distribution (the token output of the attention layer). In this case, the rate is $C'(R)/n^{\beta}$ with $\beta < \tfrac{1}{2}$, where $C'(R)$ depends polynomially on the size of the support of the distribution. The exponent $\beta$ depends on the attention geometry and the spectral properties of the token distribution. We also examine the regime in which the attention parameter tends to infinity and the softmax approaches a hardmax; in this setting, we establish a logarithmic rate of convergence. Experiments on synthetic Gaussian data and real BERT models on Wikipedia text confirm our predictions.
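The first result above can be probed numerically: attention of a fixed query over $n$ i.i.d. tokens (keys = values = tokens) is a self-normalized average that should approach its population limit at roughly a $1/\sqrt{n}$ rate. The sketch below is illustrative only and is not the paper's experiment; the choice of dimension, query, and standard-Gaussian token distribution are assumptions made here for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2
query = np.ones(d)  # fixed query point (illustrative choice)

def attention_output(tokens, q):
    """Softmax attention of a single query over n tokens (keys = values = tokens)."""
    scores = tokens @ q
    w = np.exp(scores - scores.max())  # shift for numerical stability
    w /= w.sum()
    return w @ tokens

# Monte Carlo proxy for the infinite-token limit E[e^{<q,Y>} Y] / E[e^{<q,Y>}]
ref = attention_output(rng.standard_normal((10**6, d)), query)

# Average the deviation from the limit over repeated draws of n tokens
mean_errs = {}
for n in [100, 1_000, 10_000]:
    errs = [
        np.linalg.norm(attention_output(rng.standard_normal((n, d)), query) - ref)
        for _ in range(50)
    ]
    mean_errs[n] = float(np.mean(errs))
    print(f"n = {n:>6}: mean error {mean_errs[n]:.4f}")
```

Each tenfold increase in $n$ should shrink the mean error by roughly $\sqrt{10} \approx 3$, consistent with a $C/\sqrt{n}$ rate at a fixed query.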
