
Improving Retrospective Language Agents via Joint Policy Gradient Optimization

Abstract

In recent research, large language models (LLMs) have sparked great interest in building autonomous agents. However, current prompt-based agents often rely heavily on large-scale LLMs, while fine-tuning methods, although they significantly enhance the capabilities of smaller LLMs, typically leave the fine-tuned agents without the capacity for self-reflection and self-improvement. To address these challenges, we introduce RetroAct, a novel agent framework that jointly optimizes task-planning and self-reflective evolution capabilities in language agents. Specifically, we develop a two-stage joint optimization process that integrates imitation learning and reinforcement learning, and design an off-policy joint policy gradient optimization algorithm with imitation learning regularization to improve data efficiency and training stability in agent tasks. RetroAct substantially improves the performance of open-source models, reduces dependency on closed-source LLMs, and enables fine-tuned agents to learn and evolve continuously. Extensive experiments across diverse testing environments demonstrate that RetroAct yields substantial improvements in both task performance and decision-making processes.
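
The abstract names the algorithmic ingredients but not the objective itself. As a rough sketch in Python/PyTorch, one plausible form of an off-policy joint policy gradient loss with imitation-learning regularization is shown below; the PPO-style clipping, the negative-log-likelihood imitation term, and the weight beta are illustrative assumptions, not the paper's exact formulation.

import torch

def joint_policy_gradient_loss(log_probs, old_log_probs, advantages,
                               expert_log_probs, beta=0.1, clip_eps=0.2):
    # Hypothetical joint objective: an off-policy, importance-weighted policy
    # gradient term plus an imitation-learning regularizer that keeps the
    # agent close to expert demonstrations (details are assumptions).
    ratio = torch.exp(log_probs - old_log_probs)   # off-policy importance ratio
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    # Clipped surrogate policy-gradient loss (negated for minimization).
    pg_loss = -torch.min(ratio * advantages, clipped * advantages).mean()
    # Imitation regularization: negative log-likelihood of expert actions.
    il_loss = -expert_log_probs.mean()
    return pg_loss + beta * il_loss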

@article{feng2025_2503.01490,
  title={Improving Retrospective Language Agents via Joint Policy Gradient Optimization},
  author={Xueyang Feng and Bo Lan and Quanyu Dai and Lei Wang and Jiakai Tang and Xu Chen and Zhenhua Dong and Ji-Rong Wen},
  journal={arXiv preprint arXiv:2503.01490},
  year={2025}
}