
The Last-Iterate Convergence Rate of Optimistic Mirror Descent in Stochastic Variational Inequalities (arXiv:2107.01906)

5 July 2021
Waïss Azizian, F. Iutzeler, J. Malick, P. Mertikopoulos

Papers citing "The Last-Iterate Convergence Rate of Optimistic Mirror Descent in Stochastic Variational Inequalities"

9 / 9 papers shown
Adaptively Perturbed Mirror Descent for Learning in Games
Kenshi Abe, Kaito Ariu, Mitsuki Sakamoto, Atsushi Iwasaki
26 May 2023
The rate of convergence of Bregman proximal methods: Local geometry vs. regularity vs. sharpness
Waïss Azizian, F. Iutzeler, J. Malick, P. Mertikopoulos
15 Nov 2022
On the convergence of policy gradient methods to Nash equilibria in general stochastic games
Angeliki Giannou, Kyriakos Lotidis, P. Mertikopoulos, Emmanouil-Vasileios Vlatakis-Gkaragkounis
17 Oct 2022
Last-Iterate Convergence with Full and Noisy Feedback in Two-Player Zero-Sum Games
Kenshi Abe, Kaito Ariu, Mitsuki Sakamoto, Kenta Toyoshima, Atsushi Iwasaki
21 Aug 2022
No-Regret Learning in Games with Noisy Feedback: Faster Rates and Adaptivity via Learning Rate Separation
Yu-Guan Hsieh, Kimon Antonakopoulos, V. Cevher, P. Mertikopoulos
13 Jun 2022
Stochastic Gradient Descent-Ascent: Unified Theory and New Efficient Methods
Aleksandr Beznosikov, Eduard A. Gorbunov, Hugo Berard, Nicolas Loizou
15 Feb 2022
Extragradient Method: $O(1/K)$ Last-Iterate Convergence for Monotone Variational Inequalities and Connections With Cocoercivity
Eduard A. Gorbunov, Nicolas Loizou, Gauthier Gidel
08 Oct 2021
Near-Optimal No-Regret Learning in General Games
C. Daskalakis, Maxwell Fishelson, Noah Golowich
16 Aug 2021
Adaptive first-order methods revisited: Convex optimization without Lipschitz requirements
Kimon Antonakopoulos, P. Mertikopoulos
16 Jul 2021