Beyond the Policy Gradient Theorem for Efficient Policy Updates in Actor-Critic Algorithms

15 February 2022
Romain Laroche
Rémi Tachet des Combes

Papers citing "Beyond the Policy Gradient Theorem for Efficient Policy Updates in Actor-Critic Algorithms"

3 / 3 papers shown

Coordinate Ascent for Off-Policy RL with Global Convergence Guarantees
Hsin-En Su, Yen-Ju Chen, Ping-Chun Hsieh, Xi Liu
OffRL · 10 Dec 2022

The Primacy Bias in Deep Reinforcement Learning
Evgenii Nikishin, Max Schwarzer, P. D'Oro, Pierre-Luc Bacon, Aaron C. Courville
OnRL · 16 May 2022

On the Sample Complexity of Actor-Critic Method for Reinforcement Learning with Function Approximation
Harshat Kumar, Alec Koppel, Alejandro Ribeiro
18 Oct 2019