A Variance-Reduced Cubic-Regularized Newton for Policy Optimization

Cheng Sun
Zhen Zhang
Shaofu Yang
Main: 5 pages · 1 figure · Bibliography: 2 pages · Appendix: 6 pages
Abstract

In this paper, we study a second-order approach to policy optimization in reinforcement learning. Existing second-order methods often suffer from suboptimal sample complexity or rely on unrealistic assumptions about importance sampling. To overcome these limitations, we propose VR-CR-PN, a variance-reduced cubic-regularized policy Newton algorithm. To the best of our knowledge, this is the first algorithm to integrate Hessian-aided variance reduction with second-order policy optimization, effectively addressing the distribution-shift problem and achieving the best-known sample complexity under general nonconvex conditions without resorting to importance sampling. We theoretically establish that VR-CR-PN achieves a sample complexity of $\tilde{\mathcal{O}}(\epsilon^{-3})$ to reach an $\epsilon$-second-order stationary point, significantly improving upon the previous best result of $\tilde{\mathcal{O}}(\epsilon^{-3.5})$ under comparable assumptions. As an additional contribution, we introduce a novel Hessian estimator for the expected return function that admits a uniform upper bound independent of the horizon length $H$, allowing the algorithm to achieve horizon-independent sample complexity.
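To give intuition for the Hessian-aided variance reduction the abstract refers to, the sketch below is a generic (not the paper's) version of the idea on a toy quadratic objective: instead of reweighting old samples with importance-sampling ratios, the running gradient estimate is corrected with a stochastic Hessian-vector product along the parameter path, with periodic large-batch re-anchoring. All function names, step sizes, and batch sizes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.diag([1.0, 3.0])                 # toy objective f(x) = 0.5 * x^T A x

def stoch_grad(x, batch=1):
    # noisy gradient A x, averaged over a mini-batch of noise draws
    noise = 0.05 * rng.standard_normal((batch, 2)).mean(axis=0)
    return A @ x + noise

def stoch_hvp(x, v):
    # noisy Hessian-vector product A v (illustrative noise model)
    return A @ v + 0.05 * rng.standard_normal(2)

x_prev = np.array([5.0, -4.0])
g = stoch_grad(x_prev, batch=100)       # anchor: large-batch gradient estimate
eta = 0.1
for t in range(1, 201):
    x = x_prev - eta * g
    if t % 20 == 0:
        g = stoch_grad(x, batch=100)    # periodic re-anchoring
    else:
        # Hessian-aided correction: g_t = g_{t-1} + H(x_mid) (x_t - x_{t-1});
        # no importance-sampling weights are needed for the update
        g = g + stoch_hvp(0.5 * (x + x_prev), x - x_prev)
    x_prev = x

print(np.linalg.norm(x_prev))           # close to the minimizer at the origin
```

The same correction principle, combined with a cubic-regularized Newton step rather than the plain gradient step used here, underlies second-order methods of the kind the paper studies.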
