Policy Gradient Converges to the Globally Optimal Policy for Nearly Linear-Quadratic Regulators

15 March 2023
Yin-Huan Han, Meisam Razaviyayn, Renyuan Xu

Papers citing "Policy Gradient Converges to the Globally Optimal Policy for Nearly Linear-Quadratic Regulators"

4 / 4 papers shown
Fast Policy Learning for Linear Quadratic Control with Entropy Regularization
Xin Guo, Xinyu Li, Renyuan Xu
23 Nov 2023
A Fisher-Rao gradient flow for entropy-regularised Markov decision processes in Polish spaces
B. Kerimkulov, J. Leahy, David Siska, Lukasz Szpruch, Yufei Zhang
04 Oct 2023
Learning Zero-Sum Linear Quadratic Games with Improved Sample Complexity and Last-Iterate Convergence
Jiduan Wu, Anas Barakat, Ilyas Fatkhullin, Niao He
08 Sep 2023
On Linear Convergence of Policy Gradient Methods for Finite MDPs
Jalaj Bhandari, Daniel Russo
21 Jul 2020