Remember Me, Refine Me: A Dynamic Procedural Memory Framework for Experience-Driven Agent Evolution

Zouying Cao
Jiaji Deng
Li Yu
Weikang Zhou
Zhaoyang Liu
Bolin Ding
Hai Zhao
Main: 8 pages · Appendix: 6 pages · Bibliography: 2 pages · 10 figures · 9 tables
Abstract

Procedural memory enables large language model (LLM) agents to internalize "how-to" knowledge, theoretically reducing redundant trial-and-error. However, existing frameworks predominantly follow a "passive accumulation" paradigm, treating memory as a static, append-only archive. To bridge the gap between static storage and dynamic reasoning, we propose ReMe (Remember Me, Refine Me), a comprehensive framework for experience-driven agent evolution. ReMe innovates across the memory lifecycle via three mechanisms: 1) multi-faceted distillation, which extracts fine-grained experiences by recognizing success patterns, analyzing failure triggers, and generating comparative insights; 2) context-adaptive reuse, which tailors historical insights to new contexts via scenario-aware indexing; and 3) utility-based refinement, which autonomously adds valid memories and prunes outdated ones to maintain a compact, high-quality experience pool. Extensive experiments on BFCL-V3 and AppWorld demonstrate that ReMe establishes a new state of the art in agent memory systems. Crucially, we observe a significant memory-scaling effect: Qwen3-8B equipped with ReMe outperforms the larger, memoryless Qwen3-14B, suggesting that self-evolving memory provides a computation-efficient pathway for lifelong learning. We release our code and the this http URL dataset to facilitate further research.
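The three lifecycle mechanisms can be sketched as a toy experience pool. This is a minimal illustration under assumed design choices, not the paper's implementation: the `Experience` record, keyword-overlap retrieval, and the `min_utility`/`capacity` pruning rule are all hypothetical stand-ins for what ReMe presumably does with LLM-based distillation and learned indexing.

```python
from dataclasses import dataclass

@dataclass
class Experience:
    scenario: str        # scenario tag used for indexing (assumed design)
    kind: str            # "success" | "failure" | "comparative"
    insight: str
    utility: float = 0.0 # running usefulness score
    uses: int = 0

class ExperiencePool:
    """Toy sketch of a distill / reuse / refine memory lifecycle."""

    def __init__(self, min_utility=0.0, capacity=100):
        self.pool = []
        self.min_utility = min_utility
        self.capacity = capacity

    # 1) Multi-faceted distillation: one trajectory can yield several
    #    fine-grained experiences (success pattern, failure trigger,
    #    comparative insight), not a single monolithic record.
    def distill(self, scenario, success_pattern=None,
                failure_trigger=None, comparison=None):
        for kind, text in (("success", success_pattern),
                           ("failure", failure_trigger),
                           ("comparative", comparison)):
            if text:
                self.pool.append(Experience(scenario, kind, text))

    # 2) Context-adaptive reuse: scenario-aware retrieval, here crudely
    #    approximated by token overlap between scenario descriptions.
    def retrieve(self, scenario, k=3):
        query = set(scenario.lower().split())
        scored = sorted(
            self.pool,
            key=lambda e: len(query & set(e.scenario.lower().split())),
            reverse=True)
        hits = scored[:k]
        for e in hits:
            e.uses += 1
        return hits

    # 3) Utility-based refinement: reward memories that helped, then
    #    prune used-but-unhelpful ones to keep the pool compact.
    def refine(self, feedback):
        for exp, delta in feedback:
            exp.utility += delta
        kept = [e for e in self.pool
                if e.utility >= self.min_utility or e.uses == 0]
        self.pool = sorted(kept, key=lambda e: e.utility,
                           reverse=True)[:self.capacity]
```

The key contrast with "passive accumulation" is step 3: the pool is not append-only, so an experience that is retrieved but receives negative feedback is eventually dropped rather than accumulating as stale context.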
