ResearchTrend.AI


Convergence Rate of $\mathcal{O}(1/k)$ for Optimistic Gradient and Extra-gradient Methods in Smooth Convex-Concave Saddle Point Problems
3 June 2019
Aryan Mokhtari, Asuman Ozdaglar, S. Pattathil
ArXiv · PDF · HTML
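The extra-gradient method named in the title can be illustrated with a minimal sketch. The example below runs it on the simplest bilinear saddle point problem, $\min_x \max_y f(x, y) = xy$, whose unique saddle point is $(0, 0)$; the step size and iteration count are arbitrary illustrative choices, not taken from the paper.

```python
# Extra-gradient method on the bilinear saddle point f(x, y) = x * y.
# Illustrative sketch only: eta and iters are assumed values, not from the paper.

def extragradient(x, y, eta=0.5, iters=100):
    """min over x, max over y of f(x, y) = x * y; saddle point at (0, 0)."""
    for _ in range(iters):
        # Half step: gradients evaluated at the current iterate.
        x_half = x - eta * y   # grad_x f(x, y) = y
        y_half = y + eta * x   # grad_y f(x, y) = x
        # Full step from the original point, using gradients at the midpoint.
        x, y = x - eta * y_half, y + eta * x_half
    return x, y

x, y = extragradient(1.0, 1.0)
```

On this bilinear objective, plain simultaneous gradient descent-ascent diverges in a spiral, while the midpoint gradient evaluation makes the extra-gradient iterates contract toward the saddle point.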

Papers citing "Convergence Rate of $\mathcal{O}(1/k)$ for Optimistic Gradient and Extra-gradient Methods in Smooth Convex-Concave Saddle Point Problems"

4 / 4 papers shown
Distributed Statistical Min-Max Learning in the Presence of Byzantine Agents
Arman Adibi, A. Mitra, George J. Pappas, Hamed Hassani
07 Apr 2022
Adaptive extra-gradient methods for min-max optimization and games
Kimon Antonakopoulos, E. V. Belmega, P. Mertikopoulos
22 Oct 2020
A Decentralized Proximal Point-type Method for Saddle Point Problems
Weijie Liu, Aryan Mokhtari, Asuman Ozdaglar, S. Pattathil, Zebang Shen, Nenggan Zheng
31 Oct 2019
Linear Convergence of the Primal-Dual Gradient Method for Convex-Concave Saddle Point Problems without Strong Convexity
S. Du, Wei Hu
05 Feb 2018