Loss-Sensitive Generative Adversarial Networks on Lipschitz Densities

International Journal of Computer Vision (IJCV), 2017
Guo-Jun Qi
Updates: 1) We use a non-parametric analysis of the optimal loss function to show how the LS-GAN addresses the vanishing gradient problem. 2) Better classification accuracy is reported. 3) The LS-GAN is robust to architecture changes; we observed no mode collapse even when the batch normalization layers are removed. 4) Under the same Lipschitz regularity, the Wasserstein GAN (WGAN) is consistent with the data density.

Abstract

We present a novel Loss-Sensitive GAN (LS-GAN) that learns a loss function to separate generated samples from real examples. An important property of the LS-GAN is that it allows the generator to focus on improving poor data points that are far from real examples, rather than wasting effort on samples that are already well generated, thereby improving the overall quality of generated samples. Our theoretical analysis also shows that the LS-GAN can generate samples following the true data density. In particular, we present a regularity condition on the underlying data density that allows us to model the LS-GAN with a class of Lipschitz losses and generators. This relaxes the assumption that the classic GAN must have infinite modeling capacity to obtain a similar theoretical guarantee. Furthermore, we derive a non-parametric solution that characterizes the upper and lower bounds of the losses learned by the LS-GAN, both of which are piecewise linear with non-vanishing gradients almost everywhere. There is therefore sufficient gradient to update the generator of the LS-GAN even when the loss function has been fully optimized, alleviating the vanishing gradient problem of the classic GAN and making the LS-GAN generator easier to train.
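The loss-sensitive objective described above can be sketched as a margin-based hinge: the critic's loss on a fake sample is pushed above its loss on a real sample by a data-dependent margin Delta(x, G(z)), so fake samples that already exceed the margin contribute no penalty (and no gradient), letting the generator concentrate on poorly generated samples. The following minimal NumPy sketch is illustrative only; the function names, the weighting factor `lam`, and the precomputed `margin` values are assumptions, not the paper's reference implementation.

```python
import numpy as np

def lsgan_critic_loss(loss_real, loss_fake, margin, lam=1.0):
    """Loss-sensitive critic objective (sketch): minimize the loss on
    real samples plus a hinge penalty that fires only when a fake
    sample's loss fails to exceed the real sample's loss by the
    data-dependent margin Delta(x, G(z))."""
    hinge = np.maximum(0.0, margin + loss_real - loss_fake)
    return np.mean(loss_real + lam * hinge)

def lsgan_generator_loss(loss_fake):
    """The generator simply minimizes the learned loss on its samples."""
    return np.mean(loss_fake)
```

Note that when `loss_fake` already exceeds `loss_real` by more than `margin`, the hinge term is zero, which is exactly the "stop wasting effort on well-generated samples" behavior the abstract describes.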
