Mamba's theoretically infinite context is limited in practice when sequences far exceed its training lengths. This work explores unlocking Mamba's long-context memory with a simple yet effective method, Recall with Reasoning (RwR), which distills chain-of-thought (CoT) summarization from a teacher model. Specifically, RwR prepends these summarizations as CoT prompts during fine-tuning, teaching Mamba to actively recall and reason over long contexts. Experiments on LONGMEMEVAL and HELMET show that RwR improves Mamba's long-context performance over comparable Transformer and hybrid baselines under similar pretraining conditions, while preserving its short-context capabilities, all without architectural changes.
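To make the fine-tuning recipe concrete, the following is a minimal sketch of how an RwR-style training example might be assembled: a teacher-generated CoT summary of the long context is prepended to the answer target so the model learns to recall before answering. The function name, field names, and prompt layout are assumptions for illustration, not the authors' released code or exact data format.

# Minimal sketch (assumed prompt/target layout, not the paper's exact format):
# build one supervised fine-tuning pair in which the target first "recalls"
# the relevant context via a teacher-generated CoT summary, then answers.

def build_rwr_example(long_context: str, question: str,
                      teacher_summary: str, answer: str) -> dict:
    """Compose a single fine-tuning example for RwR-style distillation."""
    prompt = (
        f"{long_context}\n\n"
        f"Question: {question}\n"
    )
    target = (
        f"Summary of relevant context: {teacher_summary}\n"
        f"Answer: {answer}"
    )
    return {"prompt": prompt, "target": target}


if __name__ == "__main__":
    example = build_rwr_example(
        long_context="(a sequence far longer than the training length)",
        question="What did the user say about their trip to Kyoto?",
        teacher_summary="The user mentioned visiting Kyoto in April and enjoying the temples.",
        answer="They visited Kyoto in April and especially enjoyed the temples.",
    )
    print(example["target"])

Under this framing, the summarization step acts as an explicit recall signal during training, which is what allows the method to work without any change to the Mamba architecture itself.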
@article{ma2025_2505.03320,
  title={Recall with Reasoning: Chain-of-Thought Distillation for Mamba's Long-Context Memory and Extrapolation},
  author={Junyu Ma and Tianqing Fang and Zhisong Zhang and Hongming Zhang and Haitao Mi and Dong Yu},
  journal={arXiv preprint arXiv:2505.03320},
  year={2025}
}