InfMem: Learning System-2 Memory Control for Long-Context Agent

Xinyu Wang
Mingze Li
Peng Lu
Xiao-Wen Chang
Lifeng Shang
Jinping Li
Fei Mi
Prasanna Parthasarathi
Yufei Cui
Main: 8 pages · Appendix: 12 pages · Bibliography: 3 pages · 13 figures · 9 tables
Abstract

Reasoning over ultra-long documents requires synthesizing sparse evidence scattered across distant segments under strict memory constraints. While streaming agents enable scalable processing, their passive memory update strategy often fails to preserve low-salience bridging evidence required for multi-hop reasoning. We propose InfMem, a control-centric agent that instantiates System-2-style control via a PreThink-Retrieve-Write protocol. InfMem actively monitors evidence sufficiency, performs targeted in-document retrieval, and applies evidence-aware joint compression to update a bounded memory. To ensure reliable control, we introduce a practical SFT-to-RL training recipe that aligns retrieval, writing, and stopping decisions with end-task correctness. On ultra-long QA benchmarks from 32k to 1M tokens, InfMem consistently outperforms MemAgent across backbones. Specifically, InfMem improves average absolute accuracy by +10.17, +11.84, and +8.23 points on Qwen3-1.7B, Qwen3-4B, and Qwen2.5-7B, respectively, while reducing inference time by 3.9× on average (up to 5.1×) via adaptive early stopping.
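The abstract's PreThink-Retrieve-Write loop can be illustrated with a minimal sketch. This is not InfMem's actual implementation: the function names (`prethink`, `retrieve`), the keyword-based sufficiency check, and the recency-based compression in `BoundedMemory` are all illustrative stand-ins for the learned policies the paper describes.

```python
# Hedged sketch of a PreThink-Retrieve-Write control loop over document chunks.
# All names and heuristics here are illustrative assumptions, not InfMem's API.

from dataclasses import dataclass, field


@dataclass
class BoundedMemory:
    limit: int                       # max number of evidence snippets retained
    notes: list = field(default_factory=list)

    def write(self, snippets):
        # Evidence-aware joint compression is approximated by keeping only
        # the most recent `limit` snippets (a crude stand-in).
        self.notes = (self.notes + snippets)[-self.limit:]


def prethink(question, memory):
    # PreThink: decide whether current evidence suffices.
    # Toy heuristic: which question words are not yet covered by memory?
    covered = {w for note in memory.notes for w in note.split()}
    return [w for w in question.split() if w not in covered]


def retrieve(missing, chunks):
    # Targeted in-document retrieval: pull chunks mentioning missing terms.
    return [c for c in chunks if any(w in c for w in missing)]


def gather_evidence(question, chunks, memory_limit=4):
    memory = BoundedMemory(limit=memory_limit)
    for _ in range(len(chunks)):     # bounded number of control steps
        missing = prethink(question, memory)
        if not missing:              # sufficiency reached: adaptive early stop
            break
        memory.write(retrieve(missing, chunks))
    return memory.notes
```

The early `break` is what yields the adaptive stopping behavior the abstract credits with the inference-time reduction: once PreThink judges the evidence sufficient, no further chunks are processed.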
