Towards Better Understanding of Adaptive Gradient Algorithms in Generative Adversarial Nets

26 December 2019
Mingrui Liu
Youssef Mroueh
Jerret Ross
Wei Zhang
Xiaodong Cui
Payel Das
Tianbao Yang
Abstract

Adaptive gradient algorithms perform gradient-based updates using the history of gradients and are ubiquitous in training deep neural networks. While the theory of adaptive gradient methods is well understood for minimization problems, the underlying factors driving their empirical success in min-max problems such as GANs remain unclear. In this paper, we aim at bridging this gap from both theoretical and empirical perspectives. First, we analyze a variant of Optimistic Stochastic Gradient (OSG) proposed by Daskalakis et al. (2017) for solving a class of non-convex non-concave min-max problems and establish $O(\epsilon^{-4})$ complexity for finding an $\epsilon$-first-order stationary point; the algorithm requires invoking only one stochastic first-order oracle per iteration while matching the state-of-the-art iteration complexity achieved by the stochastic extragradient method of Iusem et al. (2017). Then we propose an adaptive variant of OSG named Optimistic Adagrad (OAdagrad) and reveal an *improved* adaptive complexity $O(\epsilon^{-\frac{2}{1-\alpha}})$, where $\alpha$ characterizes the growth rate of the cumulative stochastic gradient and $0 \leq \alpha \leq 1/2$. To the best of our knowledge, this is the first work establishing adaptive complexity in non-convex non-concave min-max optimization. Empirically, our experiments show that adaptive gradient algorithms indeed outperform their non-adaptive counterparts in GAN training, and that this advantage can be explained by the empirically observed slow growth rate of the cumulative stochastic gradient.
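
The abstract names the algorithms only by reference, so as a rough illustration, here is a minimal NumPy sketch of an optimistic stochastic gradient step and an Adagrad-scaled variant in the spirit of OAdagrad. The specific forms used here (the `2*g - g_prev` extrapolation from Daskalakis et al. (2017) and the standard Adagrad accumulator) are assumptions for illustration and may differ in detail from the exact updates analyzed in the paper; names such as `optimistic_sgd_step` and `OAdagradSketch` are hypothetical.

```python
import numpy as np

def optimistic_sgd_step(w, g, g_prev, lr=0.01):
    """One optimistic stochastic gradient step (sketch).

    Implements w <- w - 2*lr*g + lr*g_prev: the current gradient plus a
    correction term that anticipates the next gradient, the usual
    optimistic-gradient formulation for min-max training.
    """
    return w - 2.0 * lr * g + lr * g_prev

class OAdagradSketch:
    """Sketch of an Optimistic Adagrad-style update: Adagrad's
    per-coordinate scaling applied to the optimistic gradient step.
    Illustrative only; the paper's exact update may differ."""

    def __init__(self, dim, lr=0.01, eps=1e-8):
        self.lr = lr
        self.eps = eps
        self.v = np.zeros(dim)       # running sum of squared gradients
        self.g_prev = np.zeros(dim)  # previous stochastic gradient

    def step(self, w, g):
        self.v += g ** 2             # Adagrad accumulator
        scale = self.lr / (np.sqrt(self.v) + self.eps)
        w_new = w - scale * (2.0 * g - self.g_prev)
        self.g_prev = g
        return w_new
```

In a GAN setting, one such update would be applied to each player: the generator descends on its loss while the discriminator ascends on its own (equivalently, descends on the negated loss), each keeping its own previous gradient for the optimistic correction.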
