arXiv:2305.11032
Optimistic Natural Policy Gradient: a Simple Efficient Policy Optimization Framework for Online RL

18 May 2023
Qinghua Liu
Gellert Weisz
András György
Chi Jin
Csaba Szepesvári
Abstract

While policy optimization algorithms have played an important role in the recent empirical success of Reinforcement Learning (RL), the theoretical understanding of policy optimization remains rather limited: existing analyses are either restricted to tabular MDPs or suffer from highly suboptimal sample complexity, especially in online RL where exploration is necessary. This paper proposes a simple, efficient policy optimization framework, Optimistic NPG, for online RL. Optimistic NPG can be viewed as a simple combination of the classic natural policy gradient (NPG) algorithm [Kakade, 2001] with optimistic policy evaluation subroutines that encourage exploration. For $d$-dimensional linear MDPs, Optimistic NPG is computationally efficient and learns an $\varepsilon$-optimal policy within $\tilde{O}(d^2/\varepsilon^3)$ samples, making it the first computationally efficient algorithm whose sample complexity has the optimal dimension dependence $\tilde{\Theta}(d^2)$. It also improves over the state-of-the-art results for policy optimization algorithms [Zanette et al., 2021] by a factor of $d$. In the realm of general function approximation, which subsumes linear MDPs, Optimistic NPG, to the best of our knowledge, stands as the first policy optimization algorithm that achieves polynomial sample complexity for learning near-optimal policies.
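To make the "NPG + optimistic evaluation" combination concrete, here is a minimal, hypothetical tabular sketch (the names `optimistic_npg_step`, `q_hat`, `bonus`, and `eta` are illustrative, not from the paper). With a softmax policy, one NPG step is a multiplicative-weights update driven by an optimistic action-value estimate, i.e. the estimated value plus an exploration bonus. The paper's actual guarantees concern linear MDPs and general function approximation, where an optimistic policy-evaluation subroutine takes the place of the lookup tables used here.

```python
import numpy as np

def optimistic_npg_step(pi, q_hat, bonus, eta):
    """One NPG update with an optimistic action-value estimate (tabular sketch).

    pi    : (S, A) array, current policy; each row sums to 1
    q_hat : (S, A) array, estimated action values of the current policy
    bonus : (S, A) array, exploration bonus added for optimism
    eta   : float, step size
    """
    q_optimistic = q_hat + bonus                       # optimistic policy evaluation
    # Softmax NPG is a multiplicative-weights / mirror-descent update:
    # pi_new(a|s) proportional to pi(a|s) * exp(eta * q_optimistic(s, a))
    logits = np.log(pi + 1e-12) + eta * q_optimistic
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    pi_new = np.exp(logits)
    pi_new /= pi_new.sum(axis=1, keepdims=True)
    return pi_new
```

In this sketch the exploration bonus plays the role of the optimistic evaluation subroutine; in the linear-MDP setting the paper analyzes, optimism would instead come from the evaluation procedure applied to the $d$-dimensional feature representation.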
