Modern policy gradient algorithms, such as TRPO and PPO, outperform vanilla policy gradient in many RL tasks. Questioning the common belief that enforcing approximate trust regions leads to steady policy improvement in practice, we show that the more critical factor is the improved value estimation accuracy that comes from taking more value update steps in each iteration. To demonstrate this, we show that simply increasing the number of value update steps per iteration allows vanilla policy gradient to match or exceed PPO across all the standard continuous control benchmark environments. Importantly, this simple change to vanilla policy gradient is significantly more robust to hyperparameter choices, opening up the possibility that RL algorithms may still become more effective and easier to use.
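The following is a minimal sketch, not the authors' released code, of the change the abstract describes: a vanilla policy gradient loop with a learned value baseline in which `value_update_steps` controls how many critic regression steps are taken per iteration. The environment, network sizes, and hyperparameters are illustrative assumptions, using PyTorch and Gymnasium.

```python
# Minimal sketch: vanilla policy gradient with many value update steps per iteration.
# Assumes gymnasium and PyTorch; all hyperparameters below are illustrative.
import gymnasium as gym
import torch
import torch.nn as nn

env = gym.make("Pendulum-v1")
obs_dim = env.observation_space.shape[0]
act_dim = env.action_space.shape[0]

policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
log_std = nn.Parameter(torch.zeros(act_dim))          # state-independent Gaussian std
value_fn = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, 1))

pi_opt = torch.optim.Adam(list(policy.parameters()) + [log_std], lr=3e-4)
vf_opt = torch.optim.Adam(value_fn.parameters(), lr=1e-3)

gamma = 0.99
value_update_steps = 80   # the key knob: many more critic steps per iteration

def collect_rollout(horizon=2048):
    """Collect on-policy data and Monte Carlo reward-to-go returns."""
    obs_buf, act_buf, ret_buf, rewards = [], [], [], []
    obs, _ = env.reset()
    for _ in range(horizon):
        o = torch.as_tensor(obs, dtype=torch.float32)
        with torch.no_grad():
            dist = torch.distributions.Normal(policy(o), log_std.exp())
            a = dist.sample()
        obs_buf.append(o); act_buf.append(a)
        obs, r, terminated, truncated, _ = env.step(a.numpy())
        rewards.append(r)
        if terminated or truncated:
            # discounted reward-to-go for the finished episode
            ret, rets = 0.0, []
            for rew in reversed(rewards):
                ret = rew + gamma * ret
                rets.append(ret)
            ret_buf.extend(reversed(rets))
            rewards = []
            obs, _ = env.reset()
    n = len(ret_buf)  # drop the unfinished tail so buffers stay aligned
    return (torch.stack(obs_buf[:n]), torch.stack(act_buf[:n]),
            torch.as_tensor(ret_buf, dtype=torch.float32))

for iteration in range(100):
    obs_b, act_b, ret_b = collect_rollout()

    # 1) Fit the value baseline with many regression steps per iteration.
    for _ in range(value_update_steps):
        vf_loss = ((value_fn(obs_b).squeeze(-1) - ret_b) ** 2).mean()
        vf_opt.zero_grad(); vf_loss.backward(); vf_opt.step()

    # 2) A single vanilla policy gradient step using the fitted baseline.
    with torch.no_grad():
        adv = ret_b - value_fn(obs_b).squeeze(-1)
        adv = (adv - adv.mean()) / (adv.std() + 1e-8)
    dist = torch.distributions.Normal(policy(obs_b), log_std.exp())
    logp = dist.log_prob(act_b).sum(-1)
    pg_loss = -(logp * adv).mean()
    pi_opt.zero_grad(); pg_loss.backward(); pi_opt.step()
```

The only departure from a textbook vanilla policy gradient loop is the inner critic loop: raising `value_update_steps` improves the baseline's accuracy before each policy update, which is the factor the paper identifies as critical.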
@article{wang2025_2505.19247,
  title   = {Improving Value Estimation Critically Enhances Vanilla Policy Gradient},
  author  = {Tao Wang and Ruipeng Zhang and Sicun Gao},
  journal = {arXiv preprint arXiv:2505.19247},
  year    = {2025}
}