The First Optimal Algorithm for Smooth and Strongly-Convex-Strongly-Concave Minimax Optimization

Dmitry Kovalev
Alexander Gasnikov
Abstract

In this paper, we revisit the smooth and strongly-convex-strongly-concave minimax optimization problem. Zhang et al. (2021) and Ibrahim et al. (2020) established the lower bound $\Omega\left(\sqrt{\kappa_x\kappa_y}\log\frac{1}{\epsilon}\right)$ on the number of gradient evaluations required to find an $\epsilon$-accurate solution, where $\kappa_x$ and $\kappa_y$ are condition numbers for the strong convexity and strong concavity assumptions. However, the existing state-of-the-art methods do not match this lower bound: algorithms of Lin et al. (2020) and Wang and Li (2020) have gradient evaluation complexity $\mathcal{O}\left(\sqrt{\kappa_x\kappa_y}\log^3\frac{1}{\epsilon}\right)$ and $\mathcal{O}\left(\sqrt{\kappa_x\kappa_y}\log^3(\kappa_x\kappa_y)\log\frac{1}{\epsilon}\right)$, respectively. We fix this fundamental issue by providing the first algorithm with $\mathcal{O}\left(\sqrt{\kappa_x\kappa_y}\log\frac{1}{\epsilon}\right)$ gradient evaluation complexity. We design our algorithm in three steps: (i) we reformulate the original problem as a minimization problem via the pointwise conjugate function; (ii) we apply a specific variant of the proximal point algorithm to the reformulated problem; (iii) we compute the proximal operator inexactly using the optimal algorithm for operator norm reduction in monotone inclusions.
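To fix notation, the setup the abstract refers to and the generic proximal point step behind item (ii) can be sketched as follows. This is a standard rendering assumed from the abstract's notation, not taken verbatim from the paper: the symbols $L$, $\mu_x$, $\mu_y$, $g$, and $\eta$ are illustrative, and the paper's specific variant and its inexactness criterion may differ. The problem class is

$$\min_{x} \max_{y} f(x, y), \qquad \kappa_x = \frac{L}{\mu_x}, \qquad \kappa_y = \frac{L}{\mu_y},$$

where $f$ is $L$-smooth, $\mu_x$-strongly convex in $x$, and $\mu_y$-strongly concave in $y$. Once step (i) yields a minimization problem $\min_x g(x)$, step (ii) runs a proximal point iteration of the form

$$x^{k+1} \approx \operatorname{prox}_{\eta g}(x^k) = \arg\min_{x}\left\{ g(x) + \frac{1}{2\eta}\|x - x^k\|^2 \right\},$$

with each proximal subproblem solved only inexactly, as in step (iii).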
