Generalized Optimistic Methods for Convex-Concave Saddle Point Problems

19 February 2022
Ruichen Jiang
Aryan Mokhtari
arXiv:2202.09674
Abstract

The optimistic gradient method has seen increasing popularity for solving convex-concave saddle point problems. To analyze its iteration complexity, a recent work [arXiv:1906.01115] proposed an interesting perspective that interprets this method as an approximation to the proximal point method. In this paper, we follow this approach and distill the underlying idea of optimism to propose a generalized optimistic method, which includes the optimistic gradient method as a special case. Our general framework can handle constrained saddle point problems with composite objective functions and can work with arbitrary norms using Bregman distances. Moreover, we develop a backtracking line search scheme to select the step sizes without knowledge of the smoothness coefficients. We instantiate our method with first-, second- and higher-order oracles and give best-known global iteration complexity bounds. For our first-order method, we show that the averaged iterates converge at a rate of $O(1/N)$ when the objective function is convex-concave, and it achieves linear convergence when the objective is strongly-convex-strongly-concave. For our second- and higher-order methods, under the additional assumption that the distance-generating function has Lipschitz gradient, we prove a complexity bound of $O(1/\epsilon^{\frac{2}{p+1}})$ in the convex-concave setting and a complexity bound of $O((L_p D^{\frac{p-1}{2}}/\mu)^{\frac{2}{p+1}} + \log\log\frac{1}{\epsilon})$ in the strongly-convex-strongly-concave setting, where $L_p$ ($p \geq 2$) is the Lipschitz constant of the $p$-th-order derivative, $\mu$ is the strong convexity parameter, and $D$ is the initial Bregman distance to the saddle point. Moreover, our line search scheme provably only requires a constant number of calls to a subproblem solver per iteration on average, making our first- and second-order methods particularly amenable to implementation.
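
As a point of reference for the update the paper generalizes, the Python sketch below runs the classic optimistic gradient (OGDA) step on an unconstrained bilinear saddle point problem min_x max_y x^T A y. The problem data, step size, and iteration count are illustrative choices, not values from the paper, and the sketch omits the Bregman distances, composite terms, higher-order oracles, and backtracking line search that the generalized method handles.

import numpy as np

rng = np.random.default_rng(0)
n = 10
A = rng.standard_normal((n, n))   # illustrative bilinear coupling matrix

def grad_x(x, y):
    # Gradient of f(x, y) = x^T A y with respect to x
    return A @ y

def grad_y(x, y):
    # Gradient of f(x, y) = x^T A y with respect to y
    return A.T @ x

eta = 0.1 / np.linalg.norm(A, 2)  # conservative step size (illustrative choice)
x = rng.standard_normal(n)
y = rng.standard_normal(n)
gx_prev, gy_prev = grad_x(x, y), grad_y(x, y)

for _ in range(2000):
    gx, gy = grad_x(x, y), grad_y(x, y)
    # Optimistic step: 2*(current gradient) - (previous gradient) acts as a
    # cheap prediction of the next gradient, approximating a proximal point step.
    x = x - eta * (2 * gx - gx_prev)
    y = y + eta * (2 * gy - gy_prev)
    gx_prev, gy_prev = gx, gy

# Distance of the iterates to the saddle point, which is (0, 0) when A is invertible.
print(np.linalg.norm(x), np.linalg.norm(y))

The generalized method studied in the paper replaces the explicit gradient step above with a Bregman proximal subproblem and selects the step size by backtracking, which is what allows it to handle constraints, composite objectives, arbitrary norms, and higher-order derivative information within the same optimistic template.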
