Linear recurrent neural networks (RNNs) and state-space models (SSMs) such as Mamba have become promising alternatives to softmax attention as sequence-mixing layers in Transformer architectures. Current models, however, do not exhibit the full state-tracking expressivity of RNNs because they rely on channel-wise (i.e., diagonal) sequence mixing. In this paper, we propose to compute a dense linear RNN as the fixed point of a parallelizable diagonal linear RNN in a single layer. We explore mechanisms to improve its memory and state-tracking abilities in practice, and achieve state-of-the-art results on the commonly used toy tasks $A_5$, $S_5$, copying, and modular arithmetic. We hope our results will open new avenues to more expressive and efficient sequence mixers.
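To make the central construction concrete, below is a minimal NumPy sketch of how a dense linear recurrence can be recovered as the fixed point of repeated diagonal recurrences. The specific splitting of the dense transition into a diagonal part plus a remainder, the sequential scan standing in for a parallel one, and the fixed number of iterations are illustrative assumptions, not the paper's exact parameterization.

```python
# Sketch: approximate a dense linear RNN  h_t = A h_{t-1} + x_t  by iterating a
# *diagonal* linear RNN to a fixed point. Splitting A = D + R (diagonal + remainder)
# and the iteration count are assumptions made for illustration only.
import numpy as np

def diagonal_scan(d, u):
    # Sequential reference for the diagonal recurrence h_t = d * h_{t-1} + u_t;
    # this is the part that admits a parallel (associative) scan in practice.
    h = np.zeros_like(u)
    prev = np.zeros(u.shape[1])
    for t in range(u.shape[0]):
        prev = d * prev + u[t]
        h[t] = prev
    return h

def dense_rnn_by_fixed_point(A, x, num_iters=8):
    # Split the dense transition into its diagonal part D and the remainder R.
    d = np.diag(A)
    R = A - np.diag(d)
    h = np.zeros_like(x)  # initial guess h^(0)
    for _ in range(num_iters):
        # Each iteration is one diagonal RNN whose input folds in the previous iterate:
        #   h^(k+1)_t = D h^(k+1)_{t-1} + R h^(k)_{t-1} + x_t
        # At the fixed point h^(k+1) = h^(k), this is exactly the dense recurrence.
        h_prev = np.vstack([np.zeros(x.shape[1]), h[:-1]])
        h = diagonal_scan(d, h_prev @ R.T + x)
    return h

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, T = 8, 32
    A = 0.1 * rng.normal(size=(n, n))   # keep the transition well-conditioned so iterates contract
    x = rng.normal(size=(T, n))
    # Ground truth from the dense recurrence, computed sequentially.
    ref, prev = np.zeros((T, n)), np.zeros(n)
    for t in range(T):
        prev = A @ prev + x[t]
        ref[t] = prev
    approx = dense_rnn_by_fixed_point(A, x, num_iters=12)
    print(np.max(np.abs(approx - ref)))  # error should shrink as num_iters grows
```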
@article{movahedi2025_2503.10799,
  title={Fixed-Point RNNs: From Diagonal to Dense in a Few Iterations},
  author={Sajad Movahedi and Felix Sarnthein and Nicola Muca Cirone and Antonio Orvieto},
  journal={arXiv preprint arXiv:2503.10799},
  year={2025}
}