An Optimal Multistage Stochastic Gradient Method for Minimax Problems

13 February 2020
Alireza Fallah, Asuman Ozdaglar, S. Pattathil

Papers citing "An Optimal Multistage Stochastic Gradient Method for Minimax Problems"

3 / 3 papers shown
Communication-Efficient Gradient Descent-Accent Methods for Distributed Variational Inequalities: Unified Analysis and Local Updates
Siqi Zhang, S. Choudhury, Sebastian U. Stich, Nicolas Loizou (FedML)
08 Jun 2023
FedChain: Chained Algorithms for Near-Optimal Communication Cost in Federated Learning
Charlie Hou, K. K. Thekumparampil, Giulia Fanti, Sewoong Oh (FedML)
16 Aug 2021
Linear Convergence of the Primal-Dual Gradient Method for Convex-Concave Saddle Point Problems without Strong Convexity
S. Du, Wei Hu
05 Feb 2018