Loss-Sensitive Generative Adversarial Networks on Lipschitz Densities
- GAN
This paper presents a novel loss-sensitive generative adversarial net (LS-GAN). Compared with the classic GAN, which trains its discriminator as a binary classifier of real versus generated samples, the LS-GAN learns a loss function under the constraint that a real example should incur a smaller loss than a generated sample, and trains a generator to produce samples that minimize this loss. This results in a novel paradigm of loss-sensitive GAN (LS-GAN), as well as a conditional variant that can generate samples satisfying specified conditions by suitably defining the loss function. The theoretical analysis shows that the LS-GAN can generate samples following the true data density we wish to estimate. In particular, we focus on a large family of Lipschitz densities for the underlying data distribution, which allows us to model the LS-GAN with a class of Lipschitz losses and generators. This relaxes the classic GAN's assumption of infinite modeling capacity required to obtain a similar theoretical guarantee, and provides a principled way to regularize a family of deep generative models with the proposed LS-GAN criterion, preventing them from overfitting by merely duplicating a few training examples. Furthermore, we derive a non-parametric solution that characterizes the upper and lower bounds of the losses learned by the LS-GAN. We conduct experiments to evaluate the proposed LS-GAN on classification and generation tasks, and demonstrate performance competitive with other state-of-the-art models.
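To make the loss-sensitive constraint concrete, the sketch below gives one plausible reading of the LS-GAN objectives in PyTorch: the learned loss function is penalized whenever a real example's loss fails to undercut a generated sample's loss by a data-dependent margin, while the generator minimizes the loss assigned to its samples. The L1 margin, the batch-wise pairing of real and generated samples, and the penalty weight `lam` are illustrative assumptions, not the paper's reference implementation.

```python
# Minimal sketch of the LS-GAN objectives (assumed details: an L1
# margin, batch-wise pairing, hinge penalty weight `lam`; none of
# these choices are taken from the paper's code).
import torch
import torch.nn.functional as F

def margin(real, fake):
    # Data-dependent margin Delta(x, G(z)); a per-example mean L1
    # distance is assumed here for illustration.
    return (real - fake).abs().flatten(start_dim=1).mean(dim=1)

def loss_fn_objective(L_real, L_fake, delta, lam=1.0):
    # Train the loss function: a real example should have a loss
    # smaller than a generated one by at least the margin `delta`;
    # violations are penalized with a hinge (a)_+ = max(a, 0).
    hinge = F.relu(delta + L_real - L_fake)
    return L_real.mean() + lam * hinge.mean()

def generator_objective(L_fake):
    # The generator is trained so its samples incur a small loss.
    return L_fake.mean()
```

In an alternating training loop, one would compute `L_real` and `L_fake` by applying a loss network to real data and to generator outputs, stepping the two objectives in turn; some form of weight regularization on the loss network would be one way to encourage the Lipschitz condition the analysis relies on.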