Loss-Sensitive Generative Adversarial Networks on Lipschitz Densities

International Journal of Computer Vision (IJCV), 2017
Guo-Jun Qi
Abstract

Updates: We now present a Generalized LS-GAN (GLS-GAN) in Appendix D, of which both the LS-GAN and the WGAN are special cases. This yields a super-family of GLS-GANs spanning these two extreme GAN models, both of which have delivered impressive results. We believe there may be an even sweeter spot among these newly discovered GLS-GANs, deserving further theoretical and algorithmic study that we leave to future work.

Abstract: We present a novel Loss-Sensitive GAN (LS-GAN) that learns a loss function to separate generated samples from real examples. An important property of the LS-GAN is that it allows the generator to focus on improving poor data points that are far from the real examples, rather than wasting effort on samples that are already well generated, thereby improving the overall quality of generated samples. Our theoretical analysis also shows that the LS-GAN can generate samples following the true data density. In particular, we present a regularity condition on the underlying data density that allows us to model the LS-GAN with a class of Lipschitz losses and generators. This relaxes the assumption that the classic GAN requires infinite modeling capacity to obtain a similar theoretical guarantee. Furthermore, we derive a non-parametric solution that characterizes the upper and lower bounds of the losses learned by the LS-GAN, both of which are piecewise linear and have non-vanishing gradients almost everywhere. Therefore, sufficient gradients remain to update the generator even when the loss function is fully optimized, relieving the vanishing-gradient problem of the classic GAN and making the LS-GAN generator easier to train.
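To make the objectives above concrete, here is a minimal NumPy sketch of the loss-sensitive formulation: the learned loss should rank real samples below generated ones by a data-dependent margin, with violations penalized by a hinge, while the generator minimizes the loss assigned to its samples. The function names and the `lam` weight are illustrative choices, not the paper's notation, and the leaky-rectifier cost for the GLS-GAN is written here under the common parameterization where slope 0 recovers the LS-GAN hinge and slope 1 recovers a WGAN-style cost.

```python
import numpy as np

def lsgan_critic_loss(loss_real, loss_fake, margin, lam=1.0):
    """LS-GAN objective for the learned loss function (sketch).

    loss_real / loss_fake: losses assigned to real and generated batches.
    margin: data-dependent margin Delta(x, G(z)) between paired samples.
    The hinge (.)_+ only penalizes pairs where the generated sample's loss
    does not exceed the real sample's loss by at least the margin, so the
    model stops pushing on samples that are already well separated.
    """
    hinge = np.maximum(0.0, margin + loss_real - loss_fake)
    return loss_real.mean() + lam * hinge.mean()

def lsgan_generator_loss(loss_fake):
    # The generator simply minimizes the loss assigned to its samples.
    return loss_fake.mean()

def glsgan_cost(a, slope):
    """Leaky-rectifier cost C_v(a) = max(a, v * a), 0 <= v <= 1 (sketch).

    Replacing the hinge above with C_v gives a family of GLS-GANs:
    v = 0 reduces to the LS-GAN hinge, v = 1 to a WGAN-style linear cost.
    """
    return np.maximum(a, slope * a)
```

For example, if a real sample already beats a generated one by more than the margin, the hinge contributes nothing and the critic objective reduces to the real sample's own loss, matching the "do not waste effort on well-generated samples" property described above.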
