α-GAN: Convergence and Estimation Guarantees

Abstract
We prove a two-way correspondence between the min-max optimization of general CPE loss function GANs and the minimization of associated f-divergences. We then focus on α-GAN, defined via the α-loss, which interpolates several GANs (Hellinger, vanilla, Total Variation) and corresponds to the minimization of the Arimoto divergence. We show that the Arimoto divergences induced by α-GAN equivalently converge, for all α. However, under restricted learning models and finite samples, we provide estimation bounds which indicate diverse GAN behavior as a function of α. Finally, we present empirical results on a toy dataset that highlight the practical utility of tuning the α hyperparameter.
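For context, a minimal sketch of the α-loss that the abstract refers to, following the standard definition from the α-loss literature; the notation p̂(y) for the predicted probability of the true label is ours, and the exact value function used in the paper may differ in normalization:

\[
\ell_\alpha\bigl(y, \hat{p}\bigr) \;=\; \frac{\alpha}{\alpha - 1}\Bigl(1 - \hat{p}(y)^{\,1 - 1/\alpha}\Bigr),
\qquad \alpha \in (0,1) \cup (1,\infty),
\]

with the log-loss \(\ell_1(y,\hat{p}) = -\log \hat{p}(y)\) recovered in the limit \(\alpha \to 1\) and the soft 0-1 loss \(\ell_\infty(y,\hat{p}) = 1 - \hat{p}(y)\) as \(\alpha \to \infty\). Plugging \(\ell_\alpha\) into the CPE-loss GAN objective gives the interpolation mentioned above: \(\alpha = 1\) corresponds to the vanilla GAN, while \(\alpha = 1/2\) and \(\alpha \to \infty\) are the settings associated with the Hellinger and Total Variation GANs, respectively.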