
Increasing Iterate Averaging for Solving Saddle-Point Problems

26 March 2019
Yuan Gao
Christian Kroer
Donald Goldfarb
arXiv:1903.10646
Abstract

Many problems in machine learning and game theory can be formulated as saddle-point problems, for which various first-order methods have been developed and proven efficient in practice. Under the general convex-concave assumption, most first-order methods only guarantee ergodic convergence, that is, convergence of the uniform averages of the iterates. However, numerically, the iterates themselves can sometimes converge much faster than the uniform averages. This observation motivates increasing averaging schemes that put more weight on later iterates, in contrast to the usual uniform averaging. We show that such increasing averaging schemes, applied to various first-order methods, are able to preserve the convergence of the averaged iterates with no additional assumptions or computational overhead. Extensive numerical experiments on various equilibrium computation and image denoising problems demonstrate the effectiveness of the increasing averaging schemes. In particular, the increasing averages consistently outperform the uniform averages in all test problems by orders of magnitude. When solving matrix games and extensive-form games, increasing averages consistently outperform the last iterate as well. For matrix games, a first-order method equipped with increasing averaging outperforms the highly competitive CFR+ algorithm.
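The scheme the abstract describes is simple to state: replace the uniform average (1/T) Σ x_t of the iterates with a weighted average Σ w_t x_t / Σ w_t whose weights grow with t (e.g., w_t = t). Below is a minimal sketch of the idea, not the paper's exact algorithm: simultaneous entropic mirror descent (multiplicative weights) on a random matrix game, comparing uniform, linearly increasing, and last-iterate evaluation via the duality gap. The step size eta, the horizon T, the random instance, and the choice w_t = t are all illustrative assumptions.

```python
# Sketch: uniform vs. linearly increasing iterate averaging on a matrix game
# min_x max_y x^T A y over probability simplices, solved by simultaneous
# multiplicative-weights (entropic mirror descent) updates. This is a
# stand-in first-order method for illustration, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 50
A = rng.standard_normal((m, n))       # payoff matrix (illustrative instance)

def duality_gap(x, y):
    """Saddle-point gap: best response to x minus best response to y."""
    return np.max(A.T @ x) - np.min(A @ y)

T = 5000                              # illustrative horizon
eta = 0.1                             # illustrative step size
x = np.full(m, 1.0 / m)               # uniform start on the simplex
y = np.full(n, 1.0 / n)

x_unif = np.zeros(m); y_unif = np.zeros(n)   # running uniform sums
x_incr = np.zeros(m); y_incr = np.zeros(n)   # running t-weighted sums
w_sum = 0.0

for t in range(1, T + 1):
    gx = A @ y                        # gradient for the min player
    gy = A.T @ x                      # gradient for the max player
    x = x * np.exp(-eta * gx); x /= x.sum()  # entropic mirror descent step
    y = y * np.exp(+eta * gy); y /= y.sum()  # entropic mirror ascent step

    x_unif += x; y_unif += y          # uniform averaging, w_t = 1
    x_incr += t * x; y_incr += t * y  # increasing averaging, w_t = t
    w_sum += t

print("uniform avg gap:   ", duality_gap(x_unif / T, y_unif / T))
print("increasing avg gap:", duality_gap(x_incr / w_sum, y_incr / w_sum))
print("last iterate gap:  ", duality_gap(x, y))
```

On instances like this one, the t-weighted average typically reaches a smaller duality gap than the uniform average at the same iteration count, and the extra cost is just one running weighted sum, consistent with the abstract's claim of no additional computational overhead.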
