All Papers

Large language model (LLM)-based agents have shown strong potential in multi-task scenarios, owing to their ability to transfer knowledge across diverse tasks. However, existing approaches often treat prior experiences and knowledge as monolithic units, leading to inefficient and coarse-grained knowledge transfer. In this work, we propose a novel hierarchical memory architecture that enables fine-grained knowledge transfer by decoupling high-level planning memory from low-level execution memory. To construct and refine these hierarchical memories, we introduce Hierarchical Hindsight Reflection (H²R), a mechanism that distills reusable, hierarchical knowledge from past agent-environment interactions. At test time, H²R retrieves high-level and low-level memories separately, allowing LLM-based agents to efficiently access and utilize task-relevant knowledge for new tasks. Experimental results across two benchmarks demonstrate that H²R can improve generalization and decision-making performance, outperforming prior baselines such as ExpeL.
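The abstract does not spell out the memory layout, but the core idea (separate stores for planning-level and execution-level insights, each queried independently at test time) can be sketched as follows. This is a minimal illustration under assumed interfaces, not the authors' implementation: the `HierarchicalMemory`, `MemoryStore`, and `reflect` names are hypothetical, the hindsight-reflection step is stubbed, and similarity is a toy bag-of-words cosine where a real system would use learned embeddings.

```python
# Sketch of a two-level memory with separate retrieval per level.
from collections import Counter
from dataclasses import dataclass, field
from math import sqrt


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors (toy stand-in for embeddings)."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


@dataclass
class MemoryStore:
    """One level of the hierarchy: (situation, distilled insight) pairs."""
    entries: list = field(default_factory=list)

    def add(self, situation: str, insight: str) -> None:
        self.entries.append((Counter(situation.lower().split()), situation, insight))

    def retrieve(self, query: str, k: int = 2) -> list:
        q = Counter(query.lower().split())
        ranked = sorted(self.entries, key=lambda e: cosine(q, e[0]), reverse=True)
        return [(situation, insight) for _, situation, insight in ranked[:k]]


@dataclass
class HierarchicalMemory:
    """Decoupled planning (high-level) and execution (low-level) memories."""
    planning: MemoryStore = field(default_factory=MemoryStore)
    execution: MemoryStore = field(default_factory=MemoryStore)

    def reflect(self, trajectory: dict) -> None:
        # Hindsight reflection, stubbed: assume a prior step has already
        # distilled the trajectory into one planning insight plus
        # per-step execution insights.
        self.planning.add(trajectory["task"], trajectory["plan_insight"])
        for step, insight in trajectory["step_insights"]:
            self.execution.add(step, insight)

    def retrieve(self, task: str, step: str) -> dict:
        # Separate retrieval per level, as the abstract describes.
        return {
            "plan_hints": self.planning.retrieve(task),
            "step_hints": self.execution.retrieve(step),
        }


if __name__ == "__main__":
    mem = HierarchicalMemory()
    mem.reflect({
        "task": "find a mug and heat it in the microwave",
        "plan_insight": "locate the object before attempting appliance interactions",
        "step_insights": [("open microwave", "check the door state before placing items")],
    })
    print(mem.retrieve("heat a cup in the microwave", "open the microwave door"))
```

The point of the two separate `retrieve` calls is the decoupling the paper argues for: a new task can reuse a planning insight from one past task and an execution insight from a different one, which a monolithic experience store would conflate.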