
An Approximate Ascent Approach To Prove Convergence of PPO

Leif Doering
Daniel Schmidt
Moritz Melcher
Sebastian Kassing
Benedikt Wille
Tilman Aach
Simon Weissmann
Main: 12 Pages
8 Figures
Bibliography: 4 Pages
1 Table
Appendix: 47 Pages
Abstract

Proximal Policy Optimization (PPO) is among the most widely used deep reinforcement learning algorithms, yet its theoretical foundations remain incomplete. In particular, convergence guarantees and an understanding of PPO's fundamental advantages remain largely open. Under standard theoretical assumptions, we show how PPO's policy update scheme (multiple epochs of minibatch updates on multi-use rollouts with a surrogate gradient) can be interpreted as approximate policy gradient ascent. We show how to control the bias accumulated by the surrogate gradients and use techniques from random reshuffling to prove a convergence theorem for PPO that sheds light on PPO's success. Additionally, we identify a previously overlooked issue in the truncated Generalized Advantage Estimation commonly used in PPO: at episode boundaries, the geometric weighting scheme collapses the mass of the infinite tail onto the longest available k-step advantage estimator. Empirical evaluations show that a simple weight correction can yield substantial improvements in environments with strong terminal signals, such as Lunar Lander.
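As an illustration of the truncated-GAE issue described in the abstract, the following NumPy sketch rewrites truncated GAE as a weighted sum of k-step advantage estimators. With standard truncation, the entire infinite geometric tail of weight collapses onto the longest available estimator at the episode boundary. The function names and the renormalization shown as a "weight correction" are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def k_step_advantages(rewards, values, bootstrap_value, gamma):
    """All k-step advantage estimators A_t^(k) for a rollout of length T.

    rewards[t] = r_t, values[t] = V(s_t) for t = 0..T-1,
    bootstrap_value = V(s_T) (0 at a true terminal state).
    A_t^(k) = sum_{l<k} gamma^l * r_{t+l} + gamma^k * V(s_{t+k}) - V(s_t).
    """
    T = len(rewards)
    vals = np.append(values, bootstrap_value)
    A = np.full((T, T), np.nan)
    for t in range(T):
        discounted_return = 0.0
        for k in range(1, T - t + 1):
            discounted_return += gamma ** (k - 1) * rewards[t + k - 1]
            A[t, k - 1] = discounted_return + gamma ** k * vals[t + k] - vals[t]
    return A

def truncated_gae(rewards, values, bootstrap_value, gamma, lam, renormalize=False):
    """Truncated GAE as a weighted sum of k-step advantage estimators.

    renormalize=False reproduces the usual truncated recursion: weights
    (1 - lam) * lam**(k-1) for k < K, and the whole infinite geometric
    tail, lam**(K-1), lands on the longest estimator A_t^(K).
    renormalize=True instead rescales the truncated weights to sum to one
    (an assumed, illustrative "weight correction").
    """
    T = len(rewards)
    A = k_step_advantages(rewards, values, bootstrap_value, gamma)
    gae = np.zeros(T)
    for t in range(T):
        K = T - t                            # number of available k-step estimators
        w = (1 - lam) * lam ** np.arange(K)  # geometric weights for k = 1..K
        if renormalize:
            w = w / w.sum()                  # rescale truncated weights to sum to 1
        else:
            w[-1] += lam ** K                # tail mass collapses onto A_t^(K)
        gae[t] = w @ A[t, :K]
    return gae

if __name__ == "__main__":
    # Toy episode with a strong terminal reward, illustrative values only.
    rewards = np.array([0.0, 0.0, 0.0, 10.0])
    values = np.zeros(4)
    print(truncated_gae(rewards, values, 0.0, gamma=0.99, lam=0.95))
    print(truncated_gae(rewards, values, 0.0, gamma=0.99, lam=0.95, renormalize=True))
```

In this toy episode, the uncorrected weights put weight lam**(K-1) on the longest estimator near the episode end, so advantages close to the boundary are dominated by the terminal signal; the renormalized variant keeps the truncated geometric profile instead, which is the kind of effect the paper's correction targets.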
