
Realizing GANs via a Tunable Loss Function

Information Theory Workshop (ITW), 2021
Abstract

We introduce a tunable GAN, called α-GAN, parameterized by α ∈ (0, ∞], which interpolates between various f-GANs and Integral Probability Metric based GANs (under a constrained discriminator set). We construct α-GAN using a supervised loss function, namely α-loss, a tunable loss function that captures several canonical losses. We show that α-GAN is intimately related to the Arimoto divergence, which was first proposed by Österreicher (1996) and later studied by Liese and Vajda (2006). We also study the convergence properties of α-GAN. We posit that the holistic understanding that α-GAN introduces will have the practical benefit of addressing both vanishing gradients and mode collapse.
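The abstract does not spell out the form of α-loss; a minimal sketch, assuming the commonly used probability form from the prior α-loss literature (ℓ_α(p) = (α/(α−1))(1 − p^((α−1)/α)) on the probability p assigned to the true label), could look like:

```python
import math

def alpha_loss(p: float, alpha: float) -> float:
    """Sketch of a tunable α-loss for alpha in (0, inf], applied to the
    predicted probability p of the true label. Assumed form, not taken
    verbatim from the paper.
    """
    if not 0.0 < p <= 1.0:
        raise ValueError("p must lie in (0, 1]")
    if alpha == 1:
        # The α → 1 limit recovers the usual log-loss (cross-entropy).
        return -math.log(p)
    if math.isinf(alpha):
        # α = ∞ yields the soft 0-1 loss 1 − p.
        return 1.0 - p
    return (alpha / (alpha - 1.0)) * (1.0 - p ** ((alpha - 1.0) / alpha))
```

Sweeping α then interpolates between log-loss-like behavior (α near 1) and a bounded, 0-1-like loss (large α), which is the tunability the abstract refers to.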
