Linear Convergence of the Primal-Dual Gradient Method for Convex-Concave Saddle Point Problems without Strong Convexity

5 February 2018
S. Du
Wei Hu
Abstract

We consider the convex-concave saddle point problem $\min_{x}\max_{y} f(x) + y^\top A x - g(y)$, where $f$ is smooth and convex and $g$ is smooth and strongly convex. We prove that if the coupling matrix $A$ has full column rank, the vanilla primal-dual gradient method can achieve linear convergence even if $f$ is not strongly convex. Our result generalizes previous work which either requires $f$ and $g$ to be quadratic functions or requires proximal mappings for both $f$ and $g$. We adopt a novel analysis technique that in each iteration uses a "ghost" update as a reference, and show that the iterates in the primal-dual gradient method converge to this "ghost" sequence. Using the same technique, we further give an analysis of the primal-dual stochastic variance reduced gradient (SVRG) method for convex-concave saddle point problems with a finite-sum structure.
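The vanilla primal-dual gradient method referenced in the abstract takes a simultaneous gradient-descent step in $x$ and gradient-ascent step in $y$. The sketch below runs it on a small hypothetical instance (not from the paper): $f(x) = \log(1 + e^{c^\top x})$ is smooth and convex but not strongly convex, $g(y) = \|y\|^2/2$ is smooth and strongly convex, and a random tall matrix $A$ has full column rank almost surely, matching the paper's assumptions. The step size and problem dimensions are illustrative choices, not values from the paper.

```python
import numpy as np

# Vanilla primal-dual gradient method for
#     min_x max_y  f(x) + y^T A x - g(y).
# Hypothetical instance: f(x) = log(1 + exp(c^T x)) is smooth and convex
# but NOT strongly convex; g(y) = ||y||^2 / 2 is 1-strongly convex;
# a random tall A has full column rank with probability 1.

rng = np.random.default_rng(0)
n, d = 8, 5                          # A is n x d with n >= d
A = rng.standard_normal((n, d))      # full column rank almost surely
c = rng.standard_normal(d)

def grad_f(x):
    # gradient of log(1 + exp(c^T x)) is sigmoid(c^T x) * c
    return c / (1.0 + np.exp(-c @ x))

def grad_g(y):
    # gradient of ||y||^2 / 2
    return y

x, y = np.zeros(d), np.zeros(n)
eta = 0.02                           # conservative step size (illustrative)
for _ in range(30000):
    # simultaneous update: the tuple RHS is evaluated before assignment,
    # so both steps use the previous iterate (x_k, y_k)
    x, y = (x - eta * (grad_f(x) + A.T @ y),
            y + eta * (A @ x - grad_g(y)))

# first-order optimality residual at the final iterate; under the
# paper's assumptions this decays linearly (geometrically) in k
residual = np.linalg.norm(np.concatenate(
    [grad_f(x) + A.T @ y, A @ x - grad_g(y)]))
print(f"residual = {residual:.2e}")
```

Note that neither player's objective is strongly convex in $x$ alone; it is the full column rank of the coupling matrix $A$, together with the strong convexity of $g$, that drives the linear rate the paper establishes.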

View on arXiv: 1802.01504