
The End of Transformers? On Challenging Attention and the Rise of Sub-Quadratic Architectures

16 pages main text, 4 pages bibliography, 1 page appendix; 2 figures, 3 tables
Abstract

Transformers have dominated sequence processing tasks -- most notably language modeling -- for the past seven years. However, the inherent quadratic complexity of their attention mechanism remains a significant bottleneck as context length increases. This paper surveys recent efforts to overcome this bottleneck, including advances in (sub-quadratic) attention variants, recurrent neural networks, state space models, and hybrid architectures. We critically analyze these approaches in terms of compute and memory complexity, benchmark results, and fundamental limitations to assess whether the dominance of pure-attention transformers may soon be challenged.
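To make the quadratic bottleneck concrete, here is a minimal NumPy sketch (an illustration, not code from the paper) of single-head scaled dot-product attention: the score matrix Q K^T has shape (n, n), so compute and memory grow quadratically with the context length n.

```python
# Illustrative sketch only: why self-attention is O(n^2) in sequence length n.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: arrays of shape (n, d) for a single attention head."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # (n, n) -- the quadratic term
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the key axis
    return weights @ V                               # (n, d)

n, d = 4096, 64
Q, K, V = (np.random.randn(n, d) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)          # materializes a 4096 x 4096 score matrix
```

Sub-quadratic alternatives surveyed in the paper (linear attention variants, recurrent networks, state space models, and hybrids) aim to avoid materializing this n-by-n matrix.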
