Sharper Rates for Separable Minimax and Finite Sum Optimization via Primal-Dual Extragradient Methods

9 February 2022
Yujia Jin
Aaron Sidford
Kevin Tian
Abstract

We design accelerated algorithms with improved rates for several fundamental classes of optimization problems. Our algorithms all build upon techniques related to the analysis of primal-dual extragradient methods via relative Lipschitzness proposed recently by [CST21].

(1) Separable minimax optimization. We study separable minimax optimization problems $\min_x \max_y f(x) - g(y) + h(x, y)$, where $f$ and $g$ have smoothness and strong convexity parameters $(L^x, \mu^x)$ and $(L^y, \mu^y)$, and $h$ is convex-concave with a $(\Lambda^{xx}, \Lambda^{xy}, \Lambda^{yy})$-blockwise operator norm bounded Hessian. We provide an algorithm with gradient query complexity $\tilde{O}\left(\sqrt{\frac{L^x}{\mu^x}} + \sqrt{\frac{L^y}{\mu^y}} + \frac{\Lambda^{xx}}{\mu^x} + \frac{\Lambda^{xy}}{\sqrt{\mu^x \mu^y}} + \frac{\Lambda^{yy}}{\mu^y}\right)$. Notably, for convex-concave minimax problems with bilinear coupling (e.g., quadratics), where $\Lambda^{xx} = \Lambda^{yy} = 0$, our rate matches a lower bound of [ZHZ19].

(2) Finite sum optimization. We study finite sum optimization problems $\min_x \frac{1}{n}\sum_{i\in[n]} f_i(x)$, where each $f_i$ is $L_i$-smooth and the overall problem is $\mu$-strongly convex. We provide an algorithm with gradient query complexity $\tilde{O}\left(n + \sum_{i\in[n]} \sqrt{\frac{L_i}{n\mu}}\right)$. Notably, when the smoothness bounds $\{L_i\}_{i\in[n]}$ are non-uniform, our rate improves upon accelerated SVRG [LMH15, FGKS15] and Katyusha [All17] by up to a $\sqrt{n}$ factor.

(3) Minimax finite sums. We generalize our algorithms for minimax and finite sum optimization to solve a natural family of minimax finite sum optimization problems at an accelerated rate, encapsulating both of the results above up to a logarithmic factor.
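As background for the framework these results build on: the classic primal-dual extragradient method queries the gradient operator at an extrapolation point and then re-steps from the original iterate using that extrapolated gradient. The Python sketch below runs the textbook iteration (Korpelevich's method, not the paper's accelerated algorithms) on an illustrative bilinearly coupled saddle point with strongly convex quadratic $f$ and $g$; the problem data and step-size choices are assumptions for the demo.

import numpy as np

# Minimal sketch (not the paper's accelerated method): classic extragradient
# on phi(x, y) = (mu_x/2)||x||^2 - (mu_y/2)||y||^2 + y^T A x, whose unique
# saddle point is (0, 0).
rng = np.random.default_rng(0)
d, mu_x, mu_y = 5, 1.0, 1.0
A = rng.standard_normal((d, d))

def grad_operator(x, y):
    # Monotone operator (grad_x phi, -grad_y phi).
    gx = mu_x * x + A.T @ y
    gy = mu_y * y - A @ x
    return gx, gy

eta = 0.9 / (mu_x + mu_y + np.linalg.norm(A, 2))  # below 1/Lipschitz of the operator
x, y = np.ones(d), np.ones(d)
for _ in range(2000):
    gx, gy = grad_operator(x, y)
    xh, yh = x - eta * gx, y - eta * gy            # extrapolation (half) step
    gxh, gyh = grad_operator(xh, yh)
    x, y = x - eta * gxh, y - eta * gyh            # main step uses half-step gradients
print(np.linalg.norm(x), np.linalg.norm(y))        # both near 0: the saddle point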
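To make the up-to-$\sqrt{n}$ claim in result (2) concrete, here is a short worked comparison in LaTeX under a skewed smoothness profile of our own choosing (one hard component and $n-1$ easy ones); the profile is an illustrative assumption, not an example from the paper.

% Assumed profile: L_1 = L and L_i = \mu for i \ge 2, so the average
% smoothness is \bar{L} = \frac{1}{n}\sum_i L_i \approx L/n for L \gg n\mu.
\begin{align*}
\text{uniform-smoothness baseline:} \quad
  & \tilde{O}\big(n + \sqrt{n\bar{L}/\mu}\big)
    \approx \tilde{O}\big(n + \sqrt{L/\mu}\big), \\
\text{rate of result (2):} \quad
  & \tilde{O}\Big(n + \textstyle\sum_{i\in[n]}\sqrt{L_i/(n\mu)}\Big)
    = \tilde{O}\big(n + \sqrt{L/(n\mu)} + \tfrac{n-1}{\sqrt{n}}\big)
    \approx \tilde{O}\big(n + \sqrt{L/(n\mu)}\big).
\end{align*}
% The (n-1)/\sqrt{n} term is at most n and is absorbed; the gradient term
% shrinks from \sqrt{L/\mu} to \sqrt{L/(n\mu)}, i.e. by a \sqrt{n} factor.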

View on arXiv: https://arxiv.org/abs/2202.04640