Towards Understanding Asynchronous Advantage Actor-critic: Convergence and Linear Speedup
Han Shen, K. Zhang, Min-Fong Hong, Tianyi Chen
arXiv:2012.15511, 31 December 2020
Papers citing "Towards Understanding Asynchronous Advantage Actor-critic: Convergence and Linear Speedup" (7 papers):
1. "Achieving Tighter Finite-Time Rates for Heterogeneous Federated Stochastic Approximation under Markovian Sampling" — Feng Zhu, Aritra Mitra, Robert W. Heath. [FedML] 15 Apr 2025.
2. "Asynchronous Federated Reinforcement Learning with Policy Gradient Updates: Algorithm Design and Convergence Analysis" — Guangchen Lan, Dong-Jun Han, Abolfazl Hashemi, Vaneet Aggarwal, Christopher G. Brinton. 09 Apr 2024.
3. "CAESAR: Enhancing Federated RL in Heterogeneous MDPs through Convergence-Aware Sampling with Screening" — Hei Yi Mak, Flint Xiaofeng Fan, Luca A. Lanzendörfer, Cheston Tan, Wei Tsang Ooi, Roger Wattenhofer. [FedML] 29 Mar 2024.
4. "Distributed TD(0) with Almost No Communication" — R. Liu, Alexander Olshevsky. [FedML] 16 Apr 2021.
5. "On Linear Convergence of Policy Gradient Methods for Finite MDPs" — Jalaj Bhandari, Daniel Russo. 21 Jul 2020.
6. "A Finite Time Analysis of Two Time-Scale Actor Critic Methods" — Yue Wu, Weitong Zhang, Pan Xu, Quanquan Gu. 04 May 2020.
7. "On the Sample Complexity of Actor-Critic Method for Reinforcement Learning with Function Approximation" — Harshat Kumar, Alec Koppel, Alejandro Ribeiro. 18 Oct 2019.