
GANs with First-Order Greedy Discriminators

Abstract

We propose a variant of the standard min-max framework for GANs to learn a distribution, in which the discriminator updates its strategy greedily until it reaches a first-order stationary point. We give an algorithm to train such a GAN and show that it provably converges from any initial point to an approximate local equilibrium for this framework. Our algorithm runs in time polynomial in the smoothness parameters of the loss function and independent of the dimension, and it allows for loss functions that are nonconvex and nonconcave in the parameters of the generator and discriminator. Empirically, GANs trained using our algorithm consistently learn a greater number of modes than gradient descent-ascent (GDA), optimistic mirror descent (OMD), and unrolled GANs when applied to a synthetic Gaussian mixture dataset. Moreover, they perform significantly better on CIFAR-10 than OMD and GDA in both the mean and the standard deviation of the Inception Score.
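The training scheme described above, an outer generator step wrapped around an inner "greedy" discriminator ascent that runs until a first-order stationary point, can be sketched on a toy min-max problem. This is a minimal illustration, not the paper's algorithm or loss: the objective f(x, y) = xy - 0.1y², the step sizes, and the stationarity tolerance are all hypothetical choices made for the example.

```python
# Toy sketch of a min-max loop with a greedy first-order discriminator.
# The scalar objective f(x, y) = x*y - 0.1*y**2 is a hypothetical example,
# not the paper's GAN loss: x plays the generator, y the discriminator.

def grad_x(x, y):
    # df/dx
    return y

def grad_y(x, y):
    # df/dy
    return x - 0.2 * y

def greedy_discriminator(x, y, lr=0.1, eps=1e-4, max_iters=1000):
    """Gradient ascent on y until |df/dy| < eps, i.e. until the
    discriminator reaches an approximate first-order stationary point."""
    for _ in range(max_iters):
        g = grad_y(x, y)
        if abs(g) < eps:
            break
        y += lr * g
    return y

def train(x0=1.0, y0=0.5, lr_x=0.05, steps=200):
    """Outer loop: after each full greedy discriminator phase,
    the generator takes one descent step on x."""
    x, y = x0, y0
    for _ in range(steps):
        y = greedy_discriminator(x, y)  # inner greedy ascent to stationarity
        x -= lr_x * grad_x(x, y)        # outer generator descent step
    return x, y
```

On this concave-in-y objective the inner loop drives y toward its best response y = 5x, and the outer loop then shrinks x toward the equilibrium at the origin; the general framework in the paper does not require concavity.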
