Varying k-Lipschitz Constraint for Generative Adversarial Networks
Abstract
Generative Adversarial Networks (GANs) are powerful generative models, but they suffer from training instability. The recently proposed Wasserstein GAN with gradient penalty (WGAN-GP) makes progress toward stable training. The gradient penalty serves to enforce a Lipschitz constraint on the discriminator. Further investigation shows that the gradient penalty may restrict the capacity of the discriminator. As a replacement, we introduce a varying k-Lipschitz constraint. The proposed varying k-Lipschitz constraint yields better image quality and significantly faster training when tested on GAN architectures. In addition, we introduce an effective convergence measure that correlates well with image quality.
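To make the idea concrete, the sketch below illustrates the WGAN-GP gradient penalty term E[(||∇D(x̂)|| − 1)²] generalized to a target constant k, i.e. penalizing deviation of the critic's input-gradient norm from k rather than from 1. This is only an illustrative sketch, not the paper's implementation: the toy linear critic, the function names, and the choice of k are assumptions made so that the gradient can be computed in closed form.

```python
import numpy as np

# Illustrative sketch (not the paper's code): a k-Lipschitz gradient penalty
# applied to a toy linear critic D(x) = w . x, whose input gradient is
# analytically just w, so no autodiff framework is needed.

rng = np.random.default_rng(0)
w = rng.normal(size=3)  # parameters of the toy linear critic


def grad_critic(x):
    # For a linear critic the gradient w.r.t. the input is w, independent of x.
    return np.broadcast_to(w, x.shape)


def gradient_penalty(real, fake, k=1.0, lam=10.0):
    # Interpolate between real and fake samples, as in WGAN-GP.
    eps = rng.uniform(size=(real.shape[0], 1))
    x_hat = eps * real + (1.0 - eps) * fake
    grad_norm = np.linalg.norm(grad_critic(x_hat), axis=1)
    # Penalize deviation of the gradient norm from the target k
    # (k = 1 recovers the standard WGAN-GP penalty).
    return lam * np.mean((grad_norm - k) ** 2)


real = rng.normal(size=(8, 3))
fake = rng.normal(size=(8, 3))
gp_standard = gradient_penalty(real, fake, k=1.0)
# With k equal to the critic's true gradient norm, the penalty vanishes,
# showing how a well-chosen k leaves the critic's capacity unconstrained.
gp_matched = gradient_penalty(real, fake, k=np.linalg.norm(w))
print(gp_standard, gp_matched)
```

Varying k over training (rather than fixing k = 1) is the knob the abstract refers to; the sketch only shows how the penalty term depends on that target.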
