On catastrophic forgetting in Generative Adversarial Networks

Abstract

We view the training of Generative Adversarial Networks (GANs) as a continual learning problem: the sequence of distributions produced by the generator is treated as a sequence of tasks for the discriminator. We show that catastrophic forgetting occurs in GANs and that it can make GAN training non-convergent. We then provide a theoretical analysis of the problem. To prevent catastrophic forgetting, we propose a way to adapt continual learning techniques to GANs. Our method is orthogonal to existing GAN training techniques and can be added to existing GANs without any architectural modification. Experiments on synthetic and real-world datasets confirm that the proposed method alleviates catastrophic forgetting and improves the convergence of GANs.
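The abstract does not say which continual learning technique is adapted, so the following is only an illustrative sketch of the general idea, assuming an Elastic Weight Consolidation (EWC)-style penalty on the discriminator. All names and hyperparameters here (`EWCDiscriminator`, `ewc_lambda`, `consolidate`) are hypothetical, not the authors' implementation.

```python
# Hypothetical sketch: an EWC-style penalty that discourages the discriminator
# from forgetting earlier generator distributions ("tasks"). Illustrative only.
import torch
import torch.nn as nn


class EWCDiscriminator:
    """Wraps a discriminator with an Elastic Weight Consolidation penalty.

    After each "task" (a snapshot of the generator's distribution), we store
    the current parameters and a diagonal Fisher estimate; later updates are
    penalized for moving parameters that earlier tasks deemed important.
    """

    def __init__(self, disc: nn.Module, ewc_lambda: float = 10.0):
        self.disc = disc
        self.ewc_lambda = ewc_lambda
        self.anchors = []  # list of (param snapshot, Fisher estimate) pairs

    def consolidate(self, real_batch, fake_batch, loss_fn):
        """Snapshot parameters and estimate a diagonal Fisher from squared
        gradients of the discriminator loss on one batch."""
        self.disc.zero_grad()
        loss = loss_fn(self.disc(real_batch), self.disc(fake_batch))
        loss.backward()
        params = {n: p.detach().clone()
                  for n, p in self.disc.named_parameters()}
        fisher = {n: p.grad.detach().clone() ** 2
                  for n, p in self.disc.named_parameters()
                  if p.grad is not None}
        self.anchors.append((params, fisher))

    def penalty(self):
        """Quadratic penalty keeping parameters near earlier-task anchors,
        weighted by their estimated importance (Fisher)."""
        device = next(self.disc.parameters()).device
        loss = torch.zeros((), device=device)
        for params, fisher in self.anchors:
            for n, p in self.disc.named_parameters():
                if n in fisher:
                    loss = loss + (fisher[n] * (p - params[n]) ** 2).sum()
        return self.ewc_lambda * loss
```

In a training loop of this kind, one would call `consolidate` periodically (e.g. every few thousand generator steps) and add `penalty()` to the usual discriminator loss. Since the penalty is just an extra loss term, no architectural change is needed, consistent with the abstract's claim that the method can be added to existing GANs.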
