Recall with Reasoning: Chain-of-Thought Distillation for Mamba's Long-Context Memory and Extrapolation

6 May 2025
Junyu Ma
Tianqing Fang
Zhisong Zhang
Hongming Zhang
Haitao Mi
Dong Yu
Abstract

Mamba's theoretically infinite context is limited in practice when sequences far exceed its training length. This work unlocks Mamba's long-context memory with a simple yet effective method, Recall with Reasoning (RwR), which distills chain-of-thought (CoT) summarization from a teacher model. Specifically, RwR prepends these summarizations as CoT prompts during fine-tuning, teaching Mamba to actively recall and reason over long contexts. Experiments on LONGMEMEVAL and HELMET show that RwR boosts Mamba's long-context performance over comparable Transformer and hybrid baselines under similar pretraining conditions, while preserving its short-context capabilities, all without architectural changes.
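The data construction the abstract describes — prepending a teacher model's summary as a chain-of-thought prefix to the fine-tuning target — can be sketched roughly as below. The function name and prompt/target format are illustrative assumptions, not the paper's exact implementation:

```python
# Hedged sketch of RwR-style fine-tuning data construction: the
# teacher's CoT summary of the long context is prepended to the
# answer, so the student (Mamba) learns to first recall and
# summarize, then answer. Format details are assumptions.

def build_rwr_example(long_context: str, question: str,
                      teacher_summary: str, answer: str) -> dict:
    """Pack one fine-tuning example with a CoT-summary prefix."""
    prompt = f"{long_context}\n\nQuestion: {question}"
    # The summary serves as the chain of thought: the model is
    # trained to emit it before producing the final answer.
    target = f"Summary: {teacher_summary}\nAnswer: {answer}"
    return {"prompt": prompt, "target": target}

example = build_rwr_example(
    long_context="(... thousands of tokens of dialogue history ...)",
    question="Which city did the user say they moved to?",
    teacher_summary="The user mentioned relocating to Austin.",
    answer="Austin",
)
```

At inference time the model then generates the summary and answer jointly, which is what lets it extrapolate beyond its training length without any architectural change.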

@article{ma2025_2505.03320,
  title={Recall with Reasoning: Chain-of-Thought Distillation for Mamba's Long-Context Memory and Extrapolation},
  author={Junyu Ma and Tianqing Fang and Zhisong Zhang and Hongming Zhang and Haitao Mi and Dong Yu},
  journal={arXiv preprint arXiv:2505.03320},
  year={2025}
}