UCD: Unconditional Discriminator Promotes Nash Equilibrium in GANs

Main: 9 pages · 5 figures · 4 tables · Bibliography: 3 pages · Appendix: 6 pages
Abstract

Adversarial training has proven key to one-step generation, especially for Generative Adversarial Networks (GANs) and diffusion model distillation. Yet in practice, GAN training rarely converges properly and suffers from mode collapse. In this work, we quantitatively analyze the extent of Nash equilibrium in GAN training and conclude that redundant shortcuts, created by injecting the condition into the discriminator D, disable meaningful knowledge extraction. We therefore propose to employ an unconditional discriminator (UCD), in which D is forced to extract more comprehensive and robust features with no condition injection. In this way, D can leverage better knowledge to supervise the generator G, which promotes Nash equilibrium in GAN training. A theoretical guarantee of compatibility with vanilla GAN theory shows that UCD can be implemented in a plug-in manner. Extensive experiments confirm significant performance improvements with high efficiency. For instance, we achieve 1.47 FID on the ImageNet-64 dataset, surpassing StyleGAN-XL and several state-of-the-art one-step diffusion models. The code will be made publicly available.
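The core change the abstract describes, removing the label pathway from the discriminator, can be illustrated with a minimal sketch. This is not the paper's code; the class names, the linear scoring, and the projection-style conditioning (in the spirit of projection cGANs) are all illustrative assumptions. The point is that the unconditional variant's score depends on the input features alone, so the GAN loss itself is unchanged and the swap is plug-in.

```python
import numpy as np

rng = np.random.default_rng(0)

def hinge_d_loss(real_scores, fake_scores):
    # Standard hinge discriminator loss; UCD leaves the loss untouched.
    return (np.mean(np.maximum(0.0, 1.0 - real_scores))
            + np.mean(np.maximum(0.0, 1.0 + fake_scores)))

class Discriminator:
    """Toy linear discriminator (hypothetical, for illustration only)."""

    def __init__(self, dim, num_classes=None):
        self.w = rng.normal(size=dim)
        # A conditional discriminator adds a per-class projection term;
        # the UCD variant simply omits it (num_classes=None).
        self.class_emb = (rng.normal(size=(num_classes, dim))
                          if num_classes else None)

    def score(self, x, c=None):
        s = x @ self.w
        if self.class_emb is not None and c is not None:
            # Label-dependent shortcut: the score can lean on the class
            # embedding instead of judging realism from x alone.
            s = s + np.sum(x * self.class_emb[c], axis=1)
        return s

x = rng.normal(size=(4, 8))          # a fake batch of 4 feature vectors
labels = np.array([0, 1, 0, 1])

cond_d = Discriminator(8, num_classes=2)   # conditional baseline
ucd = Discriminator(8)                     # UCD: no label pathway at all

# The unconditional score ignores labels entirely.
print(ucd.score(x))
```

Because the loss and training loop are untouched, swapping `cond_d` for `ucd` is the only change needed in this toy setup, which mirrors the plug-in claim in the abstract.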
