Policy Gradient Converges to the Globally Optimal Policy for Nearly Linear-Quadratic Regulators

15 March 2023
Yinbin Han
Meisam Razaviyayn
Renyuan Xu
Abstract

Nonlinear control systems in which the decision maker has only partial information are prevalent in a variety of applications. As a step toward studying such nonlinear systems, this work explores reinforcement learning methods for finding the optimal policy in nearly linear-quadratic regulator systems. In particular, we consider a dynamic system that combines linear and nonlinear components and is governed by a policy with the same structure. Assuming that the nonlinear component comprises kernels with small Lipschitz coefficients, we characterize the optimization landscape of the cost function. Although the cost function is nonconvex in general, we establish local strong convexity and smoothness in the vicinity of the global optimizer. In addition, we propose an initialization mechanism to leverage these properties. Building on these developments, we design a policy gradient algorithm that is guaranteed to converge to the globally optimal policy at a linear rate.
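
To make the setting concrete, here is a minimal, hypothetical sketch (not the paper's algorithm, matrices, or hyperparameters) of policy gradient on a nearly linear-quadratic system: the dynamics are linear plus a tanh nonlinearity with a small Lipschitz constant, the controller is a linear gain K (the paper's policy also carries a matching small nonlinear component, omitted here for brevity), and the gradient is estimated with a two-point zeroth-order scheme.

```python
import numpy as np

# Hypothetical problem data for a 2-state, 1-input nearly linear-quadratic system.
# The nonlinearity phi has a small Lipschitz constant (scale 0.05), mirroring the
# small-Lipschitz assumption in the abstract. None of this comes from the paper.
rng = np.random.default_rng(0)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)
phi = lambda x: 0.05 * np.tanh(x)            # small-Lipschitz nonlinear component
X0 = rng.normal(size=(8, 2, 1))              # fixed batch of initial states

def cost(K, T=50):
    """Finite-horizon quadratic cost of the policy u = -K x, averaged over X0."""
    total = 0.0
    for x in X0:
        for _ in range(T):
            u = -K @ x
            total += float(x.T @ Q @ x + u.T @ R @ u)
            x = A @ x + B @ u + phi(x)
    return total / len(X0)

def zeroth_order_grad(K, radius=0.05, n_samples=32):
    """Two-point random-perturbation estimate of the policy gradient in K."""
    grad = np.zeros_like(K)
    for _ in range(n_samples):
        U = rng.normal(size=K.shape)
        U /= np.linalg.norm(U)
        grad += (cost(K + radius * U) - cost(K - radius * U)) / (2 * radius) * U
    return grad * K.size / n_samples

# Hypothetical stabilizing initial gain, standing in for the paper's initialization
# mechanism, followed by plain gradient descent on the gain.
K = np.array([[0.5, 1.0]])
for step in range(30):
    K -= 1e-3 * zeroth_order_grad(K)
print("learned gain:", K, "cost:", cost(K))
```

In this sketch, a well-chosen initial gain plays the role of the initialization mechanism described above: it places the iterate in the region where the cost is locally strongly convex and smooth, which is what allows gradient descent to make steady progress.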

@article{han2025_2303.08431,
  title={Policy Gradient Converges to the Globally Optimal Policy for Nearly Linear-Quadratic Regulators},
  author={Yinbin Han and Meisam Razaviyayn and Renyuan Xu},
  journal={arXiv preprint arXiv:2303.08431},
  year={2025}
}