The quadratic complexity of self-attention prevents transformers from scaling effectively to long input sequences. On the other hand, modern GPUs and other specialized hardware accelerators are well-optimized for processing small input sequences in transformers during both training and inference. A natural question arises: can we take advantage of the efficiency of small transformers to handle long input sequences?

In this paper, we show that transformers with long input sequences (large transformers) can be efficiently simulated by transformers that can only take short input sequences (small transformers). Specifically, we prove that any transformer with input length $N$ can be efficiently simulated by only $O((N/M)^2)$ transformers with input length $M \ll N$, and that this cannot be improved in the worst case. However, we then prove that in various natural scenarios, including average-case inputs, sliding window masking, and attention sinks, the optimal number $O(N/M)$ of small transformers suffices.
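To make the block-decomposition idea concrete, the following NumPy sketch (illustrative only, not the construction from the paper; the names tiled_attention and small_attention_block are ours) computes exact softmax attention over N tokens by combining (N/M)^2 small calls, each of which sees only M queries and M keys, and merging the partial results with the standard online-softmax trick. The paper's simulation concerns full transformers rather than a single attention layer, but the sketch shows where a quadratic number of length-M computations naturally comes from.

# A minimal sketch, assuming M divides N. It assembles exact softmax attention
# over N tokens from (N/M)^2 block computations, each touching only M queries
# and M keys at a time. All names here are illustrative.
import numpy as np

def small_attention_block(q, k, v):
    """Attention statistics for one M x M block: unnormalized output,
    per-row max score, and per-row sum of shifted exponentials."""
    s = q @ k.T / np.sqrt(q.shape[-1])         # (M, M) scores
    m = s.max(axis=-1, keepdims=True)          # row-wise max for stability
    p = np.exp(s - m)                          # shifted exponentials
    return p @ v, m, p.sum(axis=-1, keepdims=True)

def tiled_attention(Q, K, V, M):
    """Exact softmax attention over length-N inputs using (N/M)^2 small blocks."""
    N, d = Q.shape
    out = np.zeros_like(Q)
    for i in range(0, N, M):                   # loop over query blocks
        acc = np.zeros((M, d))                 # running unnormalized output
        m_run = np.full((M, 1), -np.inf)       # running row-wise max
        l_run = np.zeros((M, 1))               # running normalizer
        for j in range(0, N, M):               # loop over key/value blocks
            o, m_blk, l_blk = small_attention_block(Q[i:i+M], K[j:j+M], V[j:j+M])
            m_new = np.maximum(m_run, m_blk)   # online-softmax merge
            acc = acc * np.exp(m_run - m_new) + o * np.exp(m_blk - m_new)
            l_run = l_run * np.exp(m_run - m_new) + l_blk * np.exp(m_blk - m_new)
            m_run = m_new
        out[i:i+M] = acc / l_run
    return out

# Sanity check against direct full attention on N = 16 tokens with M = 4.
rng = np.random.default_rng(0)
N, M, d = 16, 4, 8
Q, K, V = rng.normal(size=(3, N, d))
S = Q @ K.T / np.sqrt(d)
ref = np.exp(S - S.max(-1, keepdims=True))
ref = (ref / ref.sum(-1, keepdims=True)) @ V
assert np.allclose(tiled_attention(Q, K, V, M), ref)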
@article{yu2025_2506.12220,
  title   = {Two Heads Are Better than One: Simulating Large Transformers with Small Ones},
  author  = {Hantao Yu and Josh Alman},
  journal = {arXiv preprint arXiv:2506.12220},
  year    = {2025}
}