Regularized Gradient Descent Ascent for Two-Player Zero-Sum Markov Games
Sihan Zeng, Thinh T. Doan, J. Romberg
arXiv:2205.13746, 27 May 2022

Papers citing "Regularized Gradient Descent Ascent for Two-Player Zero-Sum Markov Games"
8 citing papers:
Chanwoo Park, K. Zhang, Asuman Ozdaglar. "Multi-Player Zero-Sum Markov Games with Networked Separable Interactions." 13 Jul 2023.
Zhuoqing Song, Jason D. Lee, Zhuoran Yang. "Can We Find Nash Equilibria at a Linear Rate in Markov Games?" 03 Mar 2023.
Samuel Sokota, Ryan D'Orazio, Chun Kai Ling, David J. Wu, J. Zico Kolter, Noam Brown. "Abstracting Imperfect Information Away from Two-Player Zero-Sum Games." 22 Jan 2023.
Shicong Cen, Yuejie Chi, S. Du, Lin Xiao. "Faster Last-iterate Convergence of Policy Optimization in Zero-Sum Markov Games." 03 Oct 2022.
Sihan Zeng, Thinh T. Doan, J. Romberg. "A Two-Time-Scale Stochastic Optimization Framework with Applications in Control and Reinforcement Learning." 29 Sep 2021.
Guanghui Lan. "Policy Mirror Descent for Reinforcement Learning: Linear Convergence, New Sampling Complexity, and Generalized Problem Classes." 30 Jan 2021.
C. Daskalakis, Dylan J. Foster, Noah Golowich. "Independent Policy Gradient Methods for Competitive Reinforcement Learning." 11 Jan 2021.
Hamed Karimi, J. Nutini, Mark W. Schmidt. "Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition." 16 Aug 2016.