MDPGT: Momentum-based Decentralized Policy Gradient Tracking
arXiv: 2112.02813 · 6 December 2021
Authors: Zhanhong Jiang, Xian Yeow Lee, Sin Yong Tan, Kai Liang Tan, Aditya Balu, Young M. Lee, Chinmay Hegde, Soumik Sarkar

Papers citing "MDPGT: Momentum-based Decentralized Policy Gradient Tracking"

4 / 4 papers shown
1. Natural Policy Gradient and Actor Critic Methods for Constrained Multi-Task Reinforcement Learning
   Sihan Zeng, Thinh T. Doan, Justin Romberg
   03 May 2024

2. Decentralized Federated Policy Gradient with Byzantine Fault-Tolerance and Provably Fast Convergence
   Philip Jordan, Florian Grötschla, Flint Xiaofeng Fan, Roger Wattenhofer
   FedML
   07 Jan 2024

3. DePAint: A Decentralized Safe Multi-Agent Reinforcement Learning Algorithm considering Peak and Average Constraints
   Raheeb Hassan, K. M. S. Wadith, Md. Mamun-or Rashid, Md. Mosaddek Khan
   22 Oct 2023

4. On the Convergence and Sample Efficiency of Variance-Reduced Policy Gradient Method
   Junyu Zhang, Chengzhuo Ni, Zheng Yu, Csaba Szepesvári, Mengdi Wang
   17 Feb 2021