
Block-Biased Mamba for Long-Range Sequence Processing

Abstract

Mamba extends earlier state space models (SSMs) by introducing input-dependent dynamics, and has demonstrated strong empirical performance across a range of domains, including language modeling, computer vision, and foundation models. However, a surprising weakness remains: despite being built on architectures designed for long-range dependencies, Mamba performs poorly on long-range sequential tasks. Understanding and addressing this gap is important for improving Mamba's universality and versatility. In this work, we analyze Mamba's limitations through three perspectives: expressiveness, inductive bias, and training stability. Our theoretical results show how Mamba falls short in each of these aspects compared to earlier SSMs such as S4D. To address these issues, we propose $\text{B}_2\text{S}_6$, a simple extension of Mamba's S6 unit that combines block-wise selective dynamics with a channel-specific bias. We prove that these changes equip the model with a better-suited inductive bias and improve its expressiveness and stability. Empirically, $\text{B}_2\text{S}_6$ outperforms S4 and S4D on Long-Range Arena (LRA) tasks while maintaining Mamba's performance on language modeling benchmarks.
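The abstract only sketches the idea at a high level, so the snippet below is a minimal, hypothetical PyTorch illustration of what "block-wise selective dynamics with a channel-specific bias" could look like in an S6-style layer: channels are grouped into blocks that share input-dependent parameters, and each channel gets its own learned additive bias. The class name `BlockBiasedSSM`, the block-grouping scheme, and the placement of the bias are assumptions made for illustration, not the paper's actual implementation; a sequential scan is used for clarity where real S6 layers use a parallel scan.

```python
# Hypothetical sketch loosely inspired by the abstract's description of B2S6.
# Not the authors' implementation; details are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BlockBiasedSSM(nn.Module):
    def __init__(self, d_model: int, d_state: int = 16, n_blocks: int = 4):
        super().__init__()
        assert d_model % n_blocks == 0
        self.d_model, self.d_state, self.n_blocks = d_model, d_state, n_blocks

        # Diagonal state matrix A with negative real part (for stability), per channel.
        self.log_neg_A = nn.Parameter(torch.zeros(d_model, d_state))

        # Input-dependent (selective) parameters shared within each block:
        # every block gets its own B_t, C_t, and step size Delta_t.
        self.proj_B = nn.Linear(d_model, n_blocks * d_state)
        self.proj_C = nn.Linear(d_model, n_blocks * d_state)
        self.proj_dt = nn.Linear(d_model, n_blocks)

        # Channel-specific additive bias on the output (the "bias" part of the sketch).
        self.channel_bias = nn.Parameter(torch.zeros(d_model))
        self.D = nn.Parameter(torch.ones(d_model))  # residual skip weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, length, d_model)
        b, L, d = x.shape
        k, n = self.n_blocks, self.d_state
        A = -torch.exp(self.log_neg_A)                      # (d, n), Re(A) < 0

        Bt = self.proj_B(x).view(b, L, k, n)                # per-block input matrix
        Ct = self.proj_C(x).view(b, L, k, n)                # per-block output matrix
        dt = F.softplus(self.proj_dt(x))                    # (b, L, k), positive steps

        # Broadcast the block-shared parameters to the channels in each block.
        ch_per_block = d // k
        Bt = Bt.repeat_interleave(ch_per_block, dim=2)      # (b, L, d, n)
        Ct = Ct.repeat_interleave(ch_per_block, dim=2)      # (b, L, d, n)
        dt = dt.repeat_interleave(ch_per_block, dim=2)      # (b, L, d)

        h = torch.zeros(b, d, n, device=x.device, dtype=x.dtype)
        ys = []
        for t in range(L):                                  # sequential scan for clarity
            decay = torch.exp(dt[:, t].unsqueeze(-1) * A)   # (b, d, n)
            h = decay * h + dt[:, t].unsqueeze(-1) * Bt[:, t] * x[:, t].unsqueeze(-1)
            ys.append((Ct[:, t] * h).sum(-1))               # readout: (b, d)
        y = torch.stack(ys, dim=1)                          # (b, L, d)
        return y + self.D * x + self.channel_bias           # skip + per-channel bias


# Quick shape check.
layer = BlockBiasedSSM(d_model=8, d_state=4, n_blocks=2)
out = layer(torch.randn(2, 32, 8))
print(out.shape)  # torch.Size([2, 32, 8])
```

Setting `n_blocks` between 1 (fully shared selective parameters) and `d_model` (fully per-channel) is the kind of knob such a block-wise design would expose; the paper's precise parameterization may differ.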

@article{yu2025_2505.09022,
  title={Block-Biased Mamba for Long-Range Sequence Processing},
  author={Annan Yu and N. Benjamin Erichson},
  journal={arXiv preprint arXiv:2505.09022},
  year={2025}
}