Sample Complexity of Policy Gradient Finding Second-Order Stationary Points

2 December 2020
Long Yang
Qian Zheng
Gang Pan
Abstract

The goal of policy-based reinforcement learning (RL) is to search for a maximum of its objective. However, due to the inherent non-concavity of this objective, convergence to a first-order stationary point (FOSP) does not guarantee that policy gradient methods find a maximum: a FOSP can be a minimum or even a saddle point, which is undesirable for RL. Fortunately, if all saddle points are \emph{strict}, the second-order stationary points (SOSP) are exactly the local maxima. We therefore adopt SOSP, rather than FOSP, as the convergence criterion to characterize the sample complexity of policy gradient. Our result shows that policy gradient converges to an $(\epsilon,\sqrt{\epsilon\chi})$-SOSP with probability at least $1-\widetilde{\mathcal{O}}(\delta)$ after a total cost of $\mathcal{O}\left(\dfrac{\epsilon^{-\frac{9}{2}}}{(1-\gamma)\sqrt{\chi}}\log\dfrac{1}{\delta}\right)$, where $\gamma\in(0,1)$. This significantly improves on the state-of-the-art result, which requires $\mathcal{O}\left(\dfrac{\epsilon^{-9}\chi^{\frac{3}{2}}}{\delta}\log\dfrac{1}{\epsilon\chi}\right)$. Our analysis is based on the key idea of decomposing the parameter space $\mathbb{R}^p$ into three non-intersecting regions: the non-stationary-point, saddle-point, and locally optimal regions, and then making a local improvement of the RL objective in each region. This technique can potentially be generalized to a broad class of policy gradient methods.
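
For context, a point $\theta$ is typically called an $(\epsilon,\sqrt{\epsilon\chi})$-SOSP of a maximized objective $J(\theta)$ when it approximately satisfies both first- and second-order conditions. The following is the standard formulation from the nonconvex optimization literature, stated here only as a sketch; the paper's exact constants and norm choices may differ:

$$\|\nabla_{\theta} J(\theta)\| \le \epsilon, \qquad \lambda_{\max}\!\left(\nabla^{2}_{\theta} J(\theta)\right) \le \sqrt{\epsilon\chi},$$

where $\lambda_{\max}(\cdot)$ denotes the largest eigenvalue of the Hessian. For a maximization problem, the second condition excludes directions of significant positive curvature, which is exactly what rules out strict saddle points.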

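To make the object of study concrete, below is a minimal REINFORCE-style sketch of the vanilla policy gradient estimator whose sample complexity the paper analyzes. The toy MDP, tabular softmax parameterization, step size, and horizon are illustrative assumptions, not the authors' algorithm or experimental setup.

import numpy as np

# Minimal REINFORCE-style policy gradient sketch on a toy 2-state, 2-action MDP.
# Illustrative only: the MDP, step size, and horizon are arbitrary assumptions,
# not the paper's setup.
rng = np.random.default_rng(0)
n_states, n_actions, gamma, horizon = 2, 2, 0.9, 50

P = np.array([[[0.9, 0.1], [0.2, 0.8]],    # hypothetical transition kernel P[s, a, s']
              [[0.6, 0.4], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],                  # hypothetical rewards R[s, a]
              [0.0, 2.0]])

def softmax_policy(theta, s):
    """Action probabilities of a tabular softmax policy pi_theta(. | s)."""
    logits = theta[s] - theta[s].max()
    p = np.exp(logits)
    return p / p.sum()

def sample_trajectory(theta):
    """Roll out one finite-horizon trajectory, recording (state, action, reward)."""
    s, traj = 0, []
    for _ in range(horizon):
        p = softmax_policy(theta, s)
        a = rng.choice(n_actions, p=p)
        traj.append((s, a, R[s, a]))
        s = rng.choice(n_states, p=P[s, a])
    return traj

def reinforce_gradient(theta, traj):
    """REINFORCE estimate of grad J: sum_t gamma^t * G_t * grad log pi(a_t | s_t)."""
    grad = np.zeros_like(theta)
    G, returns = 0.0, [0.0] * len(traj)
    for t in reversed(range(len(traj))):            # discounted returns-to-go G_t
        G = traj[t][2] + gamma * G
        returns[t] = G
    for t, (s, a, _) in enumerate(traj):
        p = softmax_policy(theta, s)
        grad_log = -p
        grad_log[a] += 1.0                          # d log pi(a | s) / d theta[s]
        grad[s] += (gamma ** t) * returns[t] * grad_log
    return grad

theta = np.zeros((n_states, n_actions))
for _ in range(200):
    g = reinforce_gradient(theta, sample_trajectory(theta))
    theta += 0.05 * g                               # stochastic gradient ascent on J(theta)

Each iteration draws a fresh trajectory from the current policy and takes a stochastic gradient ascent step on $J(\theta)$; the question the paper answers is how many such samples are needed before the iterates satisfy the SOSP conditions sketched above.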
View on arXiv: 2012.01491