
Linear Convergence of the Primal-Dual Gradient Method for Convex-Concave Saddle Point Problems without Strong Convexity

Abstract

We consider the convex-concave saddle point problem $\min_{x}\max_{y}\, f(x)+y^\top A x-g(y)$, where $f$ is smooth and convex and $g$ is smooth and strongly convex. We prove that if the coupling matrix $A$ has full column rank, the vanilla primal-dual gradient method achieves linear convergence even if $f$ is not strongly convex. Our result generalizes previous work, which either requires $f$ and $g$ to be quadratic functions or requires proximal mappings for both $f$ and $g$. We adopt a novel analysis technique that in each iteration uses a "ghost" update as a reference, and show that the iterates of the primal-dual gradient method converge to this "ghost" sequence. Using the same technique, we further give an analysis of the primal-dual stochastic variance reduced gradient (SVRG) method for convex-concave saddle point problems with a finite-sum structure.
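The vanilla primal-dual gradient method referred to above alternates a gradient descent step in $x$ and a gradient ascent step in $y$. A minimal sketch on a toy instance of our own choosing (not from the paper): we take $f(x)=0$, which is convex but not strongly convex, $g(y)=\|y\|^2/2$, which is $1$-strongly convex, and a full-column-rank $A$, so the unique saddle point is $(x^*,y^*)=(0,0)$; the step size is hand-tuned for this instance.

```python
import numpy as np

# Toy instance (illustrative choice, not from the paper):
#   f(x) = 0            -> convex, NOT strongly convex
#   g(y) = ||y||^2 / 2  -> 1-strongly convex
#   A with full column rank, so the saddle point is (0, 0).
A = np.array([[2.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])          # 3x2, full column rank
grad_f = lambda x: np.zeros_like(x)  # gradient of f
grad_g = lambda y: y                 # gradient of g

eta = 0.1                            # step size (hand-tuned for this toy)
x, y = np.ones(2), np.ones(3)
for _ in range(1000):
    # Simultaneous primal descent / dual ascent updates.
    x_new = x - eta * (grad_f(x) + A.T @ y)
    y_new = y + eta * (A @ x - grad_g(y))
    x, y = x_new, y_new

dist = np.sqrt(np.linalg.norm(x) ** 2 + np.linalg.norm(y) ** 2)
print(dist)  # distance to the saddle point after 1000 iterations
```

On this instance the iterates contract geometrically toward the saddle point even though $f$ contributes no strong convexity, consistent with the linear-convergence claim when $A$ has full column rank.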
