RAGEN: Understanding Self-Evolution in LLM Agents via Multi-Turn Reinforcement Learning

24 April 2025
Zihan Wang
Kangrui Wang
Qineng Wang
Pingyue Zhang
Linjie Li
Zhengyuan Yang
Kefan Yu
Minh Nhat Nguyen
Licheng Liu
Eli Gottlieb
Monica Lam
Yiping Lu
Kyunghyun Cho
Jiajun Wu
Li Fei-Fei
Lijuan Wang
Yejin Choi
Manling Li
Abstract

Training large language models (LLMs) as interactive agents presents unique challenges, including long-horizon decision making and interaction with stochastic environment feedback. While reinforcement learning (RL) has enabled progress on static tasks, multi-turn agent RL training remains underexplored. We propose StarPO (State-Thinking-Actions-Reward Policy Optimization), a general framework for trajectory-level agent RL, and introduce RAGEN, a modular system for training and evaluating LLM agents. Our study on three stylized environments reveals three core findings. First, agent RL training shows a recurring failure mode, the Echo Trap, in which reward variance collapses and gradients spike; we address this with StarPO-S, a stabilized variant that adds trajectory filtering, critic incorporation, and decoupled clipping. Second, the shaping of RL rollouts benefits from diverse initial states, medium interaction granularity, and more frequent sampling. Third, without fine-grained, reasoning-aware reward signals, agent reasoning hardly emerges through multi-turn RL, and agents may exhibit shallow strategies or hallucinated thoughts. Code and environments are available at this https URL.
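The abstract names three stabilization ingredients in StarPO-S: trajectory filtering, critic incorporation, and decoupled clipping. As a rough illustration only, the minimal Python sketch below shows what variance-based trajectory filtering and asymmetric (decoupled) PPO-style clipping could look like; the function names, thresholds, and the exact filtering criterion are assumptions made for this example and are not taken from the RAGEN/StarPO codebase.

import numpy as np

# Illustrative sketch only: not the authors' implementation.

def filter_trajectories(groups, keep_frac=0.25):
    """Keep the fraction of prompt groups with the highest reward variance.

    `groups` is a list of dicts, each holding the rollouts sampled from one
    initial state: {"rewards": np.ndarray, ...}. Low-variance groups carry
    little learning signal, so they are dropped before the policy update.
    The variance criterion and keep_frac value are assumptions.
    """
    variances = np.array([np.var(g["rewards"]) for g in groups])
    n_keep = max(1, int(len(groups) * keep_frac))
    keep_idx = np.argsort(variances)[-n_keep:]
    return [groups[i] for i in keep_idx]

def decoupled_clip_objective(ratio, advantage, clip_low=0.2, clip_high=0.28):
    """PPO-style surrogate with separate lower and upper clip ranges.

    An upper bound wider than the lower bound (asymmetric clipping) lets
    positive-advantage tokens move further while still bounding updates.
    The specific clip values here are placeholders.
    """
    clipped = np.clip(ratio, 1.0 - clip_low, 1.0 + clip_high)
    return np.minimum(ratio * advantage, clipped * advantage)

# Tiny usage example with synthetic numbers.
rng = np.random.default_rng(0)
groups = [{"rewards": rng.normal(scale=s, size=8)} for s in (0.01, 0.5, 1.0, 0.02)]
kept = filter_trajectories(groups, keep_frac=0.5)
print(len(kept), "groups kept for the update")

ratio = np.array([0.7, 1.0, 1.5])
adv = np.array([1.0, -0.5, 2.0])
print(decoupled_clip_objective(ratio, adv))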

@article{wang2025_2504.20073,
  title={RAGEN: Understanding Self-Evolution in LLM Agents via Multi-Turn Reinforcement Learning},
  author={Zihan Wang and Kangrui Wang and Qineng Wang and Pingyue Zhang and Linjie Li and Zhengyuan Yang and Kefan Yu and Minh Nhat Nguyen and Licheng Liu and Eli Gottlieb and Monica Lam and Yiping Lu and Kyunghyun Cho and Jiajun Wu and Li Fei-Fei and Lijuan Wang and Yejin Choi and Manling Li},
  journal={arXiv preprint arXiv:2504.20073},
  year={2025}
}