Sliding Window Recurrences for Sequence Models

Dragos Secrieru
Garyk Brixi
Yoshua Bengio
Taiji Suzuki
Michael Poli
Stefano Massaroli
Main: 23 pages · 18 figures · 2 tables · Bibliography: 3 pages · Appendix: 4 pages
Abstract

Multi-hybrid architectures are poised to take over language modeling due to their better quality and performance. We introduce a hierarchical decomposition framework for linear recurrences that allows us to develop algorithms aligned with GPU memory hierarchies, yielding Sliding Window Recurrences (SWR). We focus specifically on truncating recurrences to hardware-aligned windows, which are naturally jagged, limiting costly inter-warp communication. Using SWR, we develop Phalanx layers that serve as drop-in replacements for windowed attention or linear recurrences. In 1B-parameter multi-hybrid models, Phalanx achieves 10-40% speedups over optimized Transformers across 4K to 32K context lengths while matching perplexity.
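To make the core idea concrete, here is a minimal sketch of what truncating a linear recurrence to a window means, using a scalar recurrence h_t = a·h_{t-1} + x_t. The function names, the scalar decay `a`, and the window size `W` are illustrative assumptions, not the paper's actual kernels or hardware-aligned layout:

```python
def full_recurrence(x, a):
    # Standard linear recurrence: h_t = a * h_{t-1} + x_t,
    # so h_t sums contributions a**(t-s) * x_s over all s <= t.
    h, out = 0.0, []
    for xt in x:
        h = a * h + xt
        out.append(h)
    return out

def windowed_recurrence(x, a, W):
    # Sliding-window truncation (illustrative): keep only the
    # contributions of the last W inputs to each state h_t.
    out = []
    for t in range(len(x)):
        start = max(0, t - W + 1)
        h = sum(a ** (t - s) * x[s] for s in range(start, t + 1))
        out.append(h)
    return out
```

When `W` covers the whole sequence the two agree exactly; with a smaller `W`, older contributions (which decay as powers of `a`) are dropped, which is what bounds the work per position and, in the paper's setting, the inter-warp communication.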
