Convergence Proof for Actor-Critic Methods Applied to PPO and RUDDER (arXiv:2012.01399)

2 December 2020
Markus Holzleitner
Lukas Gruber
Jose A. Arjona-Medina
Johannes Brandstetter
Sepp Hochreiter

Papers citing "Convergence Proof for Actor-Critic Methods Applied to PPO and RUDDER"

4 papers shown

Value Improved Actor Critic Algorithms
Yaniv Oren, Moritz A. Zanger, Pascal R. van der Vaart, M. Spaan, Wendelin Bohmer
OffRL · 03 Jun 2024

The Definitive Guide to Policy Gradients in Deep Reinforcement Learning: Theory, Algorithms and Implementations
Matthias Lehmann
24 Jan 2024

Reactive Exploration to Cope with Non-Stationarity in Lifelong Reinforcement Learning
C. Steinparz, Thomas Schmied, Fabian Paischer, Marius-Constantin Dinu, Vihang Patil, Angela Bitto-Nemling, Hamid Eghbalzadeh, Sepp Hochreiter
CLL · 12 Jul 2022

The Loss Surfaces of Multilayer Networks
A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun
ODL · 30 Nov 2014