
Flash STU: Fast Spectral Transform Units

Abstract

Recent advances in state-space model architectures have shown great promise for efficient sequence modeling, but challenges remain in balancing computational efficiency with model expressiveness. We propose the Flash STU architecture, a hybrid model that interleaves spectral state space model layers with sliding window attention, enabling scaling to billions of parameters for language modeling while maintaining near-linear time complexity. We evaluate the Flash STU and its variants on diverse sequence prediction tasks, including linear dynamical systems, robotics control, and language modeling. We find that, given a fixed parameter budget, the Flash STU architecture consistently outperforms the Transformer and other leading state-space models such as S4 and Mamba-2.
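
To make the interleaved design concrete, below is a minimal PyTorch sketch, not the authors' implementation. It assumes the STU layer convolves its input with fixed filters taken from the top eigenvectors of the Hankel matrix Z[i,j] = 2/((i+j)^3 - (i+j)), following the spectral filtering literature the STU builds on; the filter count, window size, and class names (STULayer, SlidingWindowAttention, FlashSTUBlock) are illustrative, and the real model would use optimized FFT and FlashAttention-style kernels rather than the naive dense mask shown here.

import torch
import torch.nn as nn


def spectral_filters(seq_len: int, num_filters: int) -> torch.Tensor:
    # Fixed (not learned) filters: top eigenvectors of the Hankel matrix
    # Z[i, j] = 2 / ((i+j)^3 - (i+j)); `num_filters` is an illustrative choice.
    idx = torch.arange(1, seq_len + 1, dtype=torch.float64)
    s = idx[:, None] + idx[None, :]
    Z = 2.0 / (s**3 - s)
    _, eigvecs = torch.linalg.eigh(Z)  # eigenvalues ascend; keep the top-k
    return eigvecs[:, -num_filters:].to(torch.float32)  # (seq_len, k)


class STULayer(nn.Module):
    # Spectral Transform Unit sketch: causal convolution of the input with
    # fixed spectral filters (computed via FFT), then a learned projection.

    def __init__(self, dim: int, seq_len: int, num_filters: int = 24):
        super().__init__()
        self.register_buffer("filters", spectral_filters(seq_len, num_filters))
        self.proj = nn.Linear(num_filters * dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, L, D = x.shape
        n = 2 * L  # zero-pad so the FFT convolution is linear, not circular
        X = torch.fft.rfft(x, n=n, dim=1)                 # (B, n//2+1, D)
        H = torch.fft.rfft(self.filters[:L], n=n, dim=0)  # (n//2+1, K)
        Y = torch.fft.irfft(
            X.unsqueeze(2) * H.unsqueeze(0).unsqueeze(-1), n=n, dim=1
        )[:, :L]                                          # (B, L, K, D)
        return self.proj(Y.reshape(B, L, -1))


class SlidingWindowAttention(nn.Module):
    # Causal attention restricted to a local window via a banded boolean
    # mask; a production kernel would implement this far more efficiently.

    def __init__(self, dim: int, num_heads: int = 8, window: int = 512):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.window = window

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        L = x.size(1)
        i = torch.arange(L, device=x.device)
        d = i[:, None] - i[None, :]
        # True = disallowed: future tokens, or tokens beyond the window.
        mask = (d < 0) | (d >= self.window)
        out, _ = self.attn(x, x, x, attn_mask=mask)
        return out


class FlashSTUBlock(nn.Module):
    # One interleaved pair: pre-norm STU and pre-norm sliding-window
    # attention, each wrapped in a residual connection.

    def __init__(self, dim: int, seq_len: int):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.stu = STULayer(dim, seq_len)
        self.norm2 = nn.LayerNorm(dim)
        self.swa = SlidingWindowAttention(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.stu(self.norm1(x))
        return x + self.swa(self.norm2(x))

Stacking such blocks between an embedding layer and an output head gives a decoder whose per-layer cost is roughly O(L log L) for the FFT convolution plus O(L x window) for the local attention, which is the near-linear scaling the abstract refers to.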

@article{liu2025_2409.10489,
  title={Flash STU: Fast Spectral Transform Units},
  author={Y. Isabel Liu and Windsor Nguyen and Yagiz Devre and Evan Dogariu and Anirudha Majumdar and Elad Hazan},
  journal={arXiv preprint arXiv:2409.10489},
  year={2025}
}