Large language models (LLMs) have made significant advances in natural language processing, but they still face challenges such as continuous decision-making. In this research, we propose a novel framework that integrates iterative feedback, reflective mechanisms, and a memory optimization mechanism based on the Ebbinghaus forgetting curve, which significantly enhances agents' capabilities in handling multi-tasking and long-span information.
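To make the memory-optimization idea concrete, here is a minimal illustrative sketch of an Ebbinghaus-style memory store, not the paper's actual implementation: the retention formula R = exp(-t/S), the `MemoryStore` class, the rehearsal-strengthening rule, and the pruning threshold are all assumptions chosen for this example.

```python
import math


def retention(elapsed: float, strength: float) -> float:
    """Ebbinghaus forgetting curve: R = exp(-t / S),
    where t is time since last recall and S is memory strength."""
    return math.exp(-elapsed / strength)


class MemoryStore:
    """Hypothetical agent memory that decays over time and is
    reinforced by recall (rehearsal), then pruned below a threshold."""

    def __init__(self, threshold: float = 0.3):
        self.threshold = threshold
        # key -> (value, time_of_last_recall, strength)
        self.items = {}

    def add(self, key, value, now: float, strength: float = 1.0):
        self.items[key] = (value, now, strength)

    def recall(self, key, now: float):
        value, _, strength = self.items[key]
        # Rehearsal resets the decay clock and strengthens the trace.
        self.items[key] = (value, now, strength + 1.0)
        return value

    def prune(self, now: float):
        # Keep only memories whose predicted retention is still high enough.
        self.items = {
            k: (v, t0, s)
            for k, (v, t0, s) in self.items.items()
            if retention(now - t0, s) >= self.threshold
        }


store = MemoryStore()
store.add("task_a", "summarize report", now=0.0)
store.add("task_b", "book flight", now=0.0)
store.recall("task_b", now=1.5)   # rehearsed: stronger, clock reset
store.prune(now=3.0)              # unrehearsed "task_a" has decayed away
```

Under these assumed parameters, a memory recalled once survives pruning while an untouched one is forgotten, mirroring how the framework would retain long-span information that the agent keeps using.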
@article{liang2025_2409.00872,
  title={Self-evolving Agents with reflective and memory-augmented abilities},
  author={Xuechen Liang and Yangfan He and Yinghui Xia and Xinyuan Song and Jianhui Wang and Meiling Tao and Li Sun and Xinhang Yuan and Jiayi Su and Keqin Li and Jiaqi Chen and Jinsong Yang and Siyuan Chen and Tianyu Shi},
  journal={arXiv preprint arXiv:2409.00872},
  year={2025}
}