
Fast Global Convergence of Policy Optimization for Constrained MDPs

Abstract

We address the issue of safety in reinforcement learning. We pose the problem in a discounted infinite-horizon constrained Markov decision process framework. Existing results have shown that gradient-based methods are able to achieve an $\mathcal{O}(1/\sqrt{T})$ global convergence rate for both the optimality gap and the constraint violation. We exhibit a natural policy gradient-based algorithm with a faster convergence rate of $\mathcal{O}(\log(T)/T)$ for both the optimality gap and the constraint violation. When Slater's condition is satisfied and known a priori, zero constraint violation can further be guaranteed for a sufficiently large $T$ while maintaining the same convergence rate.
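To make the setting concrete, below is a minimal sketch of a generic primal-dual natural policy gradient scheme on a toy tabular constrained MDP. This is not necessarily the paper's algorithm: the random environment, the step sizes `eta` and `xi`, the constraint threshold `b`, and the plain dual subgradient update are all illustrative assumptions. For softmax tabular policies, the NPG step reduces to a multiplicative-weights update driven by the Q-values of the Lagrangian reward.

```python
# A minimal sketch (not the paper's exact method) of primal-dual natural
# policy gradient on a toy tabular constrained MDP. The random environment,
# step sizes eta/xi, threshold b, and iteration count are all assumptions.
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 5, 3, 0.9

P = rng.dirichlet(np.ones(S), size=(S, A))  # P[s, a] = next-state distribution
r = rng.uniform(size=(S, A))                # reward in [0, 1]
g = rng.uniform(size=(S, A))                # constraint utility in [0, 1]
b = 3.0                                     # require V_g(rho) >= b (assumed)
rho = np.ones(S) / S                        # initial state distribution

def q_values(pi, reward):
    """Exact Q^pi via the Bellman linear system (I - gamma * P^pi) Q = reward."""
    P_pi = np.einsum("sap,pb->sapb", P, pi).reshape(S * A, S * A)
    return np.linalg.solve(np.eye(S * A) - gamma * P_pi,
                           reward.reshape(-1)).reshape(S, A)

def value(pi, reward):
    """V^pi(rho) = E_{s ~ rho, a ~ pi(.|s)} [Q^pi(s, a)]."""
    return rho @ np.sum(pi * q_values(pi, reward), axis=1)

pi = np.ones((S, A)) / A       # softmax-tabular policy, initialized uniform
lam, eta, xi = 0.0, 1.0, 0.1   # dual variable and assumed step sizes

for t in range(300):
    # Primal NPG step on the Lagrangian reward r + lam * g: with a softmax
    # parameterization this is a multiplicative-weights update in Q-values.
    q_L = q_values(pi, r + lam * g)
    pi = pi * np.exp(eta * q_L)
    pi /= pi.sum(axis=1, keepdims=True)
    # Dual projected-subgradient step: raise lam while the constraint is
    # violated (V_g < b), and keep lam >= 0.
    lam = max(0.0, lam - xi * (value(pi, g) - b))

print(f"reward value    : {value(pi, r):.3f}")
print(f"constraint value: {value(pi, g):.3f} (threshold b = {b})")
```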
