Improving Value Estimation Critically Enhances Vanilla Policy Gradient

25 May 2025
Tao Wang, Ruipeng Zhang, Sicun Gao
OffRL
Abstract

Modern policy gradient algorithms, such as TRPO and PPO, outperform vanilla policy gradient in many RL tasks. Questioning the common belief that enforcing approximate trust regions leads to steady policy improvement in practice, we show that the more critical factor is the enhanced value estimation accuracy from more value update steps in each iteration. To demonstrate, we show that by simply increasing the number of value update steps per iteration, vanilla policy gradient itself can achieve performance comparable to or better than PPO in all the standard continuous control benchmark environments. Importantly, this simple change to vanilla policy gradient is significantly more robust to hyperparameter choices, opening up the possibility that RL algorithms may still become more effective and easier to use.
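The abstract describes a very small algorithmic change: keep the vanilla policy gradient update, but fit the value baseline with many more gradient steps on each collected batch before taking the single policy step. Below is a minimal sketch of that idea in PyTorch with Gymnasium; it is not the authors' implementation, and the environment, network sizes, learning rates, and the number of value steps (VALUE_STEPS_PER_ITER) are illustrative assumptions rather than values taken from the paper.

# Minimal sketch (not the authors' code): vanilla policy gradient where the
# value network is updated K times per batch, illustrating the paper's claim
# that extra value-update steps are the key ingredient. Hyperparameters are
# placeholders, not values from the paper.
import gymnasium as gym
import torch
import torch.nn as nn

ENV_ID = "Pendulum-v1"          # any standard continuous-control task
VALUE_STEPS_PER_ITER = 80       # K: the knob the paper highlights
GAMMA = 0.99

env = gym.make(ENV_ID)
obs_dim = env.observation_space.shape[0]
act_dim = env.action_space.shape[0]

# Gaussian policy with a state-independent log-std, plus a value baseline.
policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
log_std = nn.Parameter(torch.zeros(act_dim))
value_fn = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, 1))

pi_opt = torch.optim.Adam(list(policy.parameters()) + [log_std], lr=3e-4)
v_opt = torch.optim.Adam(value_fn.parameters(), lr=1e-3)


def collect_batch(steps=2048):
    """Roll out the current policy; return (obs, actions, reward-to-go)."""
    obs_buf, act_buf, rew_buf, done_buf = [], [], [], []
    obs, _ = env.reset()
    for _ in range(steps):
        obs_t = torch.as_tensor(obs, dtype=torch.float32)
        with torch.no_grad():
            dist = torch.distributions.Normal(policy(obs_t), log_std.exp())
            act = dist.sample()
        next_obs, rew, terminated, truncated, _ = env.step(act.numpy())
        obs_buf.append(obs_t)
        act_buf.append(act)
        rew_buf.append(float(rew))
        done_buf.append(terminated or truncated)
        obs = next_obs
        if terminated or truncated:
            obs, _ = env.reset()
    # Discounted reward-to-go, reset at episode boundaries.
    rets, running = [], 0.0
    for rew, done in zip(reversed(rew_buf), reversed(done_buf)):
        running = rew + GAMMA * running * (0.0 if done else 1.0)
        rets.append(running)
    rets.reverse()
    return (torch.stack(obs_buf), torch.stack(act_buf),
            torch.tensor(rets, dtype=torch.float32))


for iteration in range(500):
    obs_b, act_b, ret_b = collect_batch()

    # 1) Many value-function updates per iteration (the change under study).
    for _ in range(VALUE_STEPS_PER_ITER):
        v_loss = ((value_fn(obs_b).squeeze(-1) - ret_b) ** 2).mean()
        v_opt.zero_grad()
        v_loss.backward()
        v_opt.step()

    # 2) One plain (vanilla) policy-gradient step with the improved baseline.
    with torch.no_grad():
        adv = ret_b - value_fn(obs_b).squeeze(-1)
        adv = (adv - adv.mean()) / (adv.std() + 1e-8)
    dist = torch.distributions.Normal(policy(obs_b), log_std.exp())
    logp = dist.log_prob(act_b).sum(-1)
    pi_loss = -(logp * adv).mean()
    pi_opt.zero_grad()
    pi_loss.backward()
    pi_opt.step()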

@article{wang2025_2505.19247,
  title={Improving Value Estimation Critically Enhances Vanilla Policy Gradient},
  author={Tao Wang and Ruipeng Zhang and Sicun Gao},
  journal={arXiv preprint arXiv:2505.19247},
  year={2025}
}