Loss-Sensitive Generative Adversarial Networks on Lipschitz Densities
Updates to previous version: 1) we use the non-parametric characterization of the optimal loss function to analyze how the LS-GAN addresses the vanishing gradient encountered when training the generator of the classic GAN; 2) improved classification accuracy is reported; 3) we unify the Wasserstein GAN (WGAN) under the same Lipschitz regularity to prove its consistency with the underlying data density.

Abstract: In this paper, we present a novel Loss-Sensitive GAN (LS-GAN) that learns a loss function to separate generated samples from real examples. An important property of the LS-GAN is that it allows the generator to focus on improving poor samples that are far from the real examples, rather than wasting effort on samples that are already well generated, thereby improving the overall quality of generated samples. Our theoretical analysis shows that the LS-GAN can generate samples following the true data density. In particular, we present a regularity condition on the underlying data density that allows us to model the LS-GAN with a class of Lipschitz losses and generators. This relaxes the classic GAN's assumption of infinite modeling capacity required to obtain a similar theoretical guarantee. Furthermore, we derive a non-parametric solution that characterizes the upper and lower bounds of the losses learned by the LS-GAN; both bounds are piecewise linear and have non-vanishing gradient almost everywhere. There is therefore sufficient gradient to update the generator of the LS-GAN even when the loss function has been fully optimized, alleviating the vanishing-gradient problem of the classic GAN and making the LS-GAN generator easier to train. We also generalize the unsupervised LS-GAN to a conditional model that generates samples based on given conditions, and show its applications in both supervised and semi-supervised learning problems.
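As a rough illustration of the loss-sensitivity idea described above, the sketch below computes a margin-based critic objective of the form E[L(x)] + lambda * E[(Delta(x, G(z)) + L(x) - L(G(z)))_+], where the hinge term is active only for generated samples whose learned loss does not exceed that of real samples by the margin Delta. This is a hedged, simplified reconstruction, not the paper's exact formulation; the function name, the use of NumPy arrays of precomputed loss values, and the default lambda are all illustrative assumptions.

```python
import numpy as np

def ls_gan_critic_objective(loss_real, loss_fake, delta, lam=1.0):
    """Sketch of a loss-sensitive critic objective (illustrative, not the
    paper's exact formulation).

    loss_real: array of learned loss values L(x) on real samples
    loss_fake: array of learned loss values L(G(z)) on generated samples
    delta:     array of margins Delta(x, G(z)) between paired samples
    lam:       weight on the margin-violation term (assumed hyperparameter)
    """
    # Hinge (.)_+ penalizes pairs where the generated sample's loss fails
    # to exceed the real sample's loss by the required margin. Well-generated
    # samples (large loss_fake relative to the margin) contribute zero, so
    # gradients concentrate on poorly generated samples.
    hinge = np.maximum(0.0, delta + loss_real - loss_fake)
    return float(loss_real.mean() + lam * hinge.mean())
```

Under this form, a generated sample that already satisfies the margin contributes nothing to the second term, which matches the abstract's claim that the generator's effort is directed at poor samples far from the real data.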