On catastrophic forgetting in Generative Adversarial Networks
In this paper, we view the training of Generative Adversarial Networks (GANs) as a continual learning problem, in which the sequence of generated distributions forms the sequence of tasks. We show that catastrophic forgetting is present in GANs, demonstrate how it can make GAN training non-convergent, and provide a theoretical analysis of the problem. To alleviate catastrophic forgetting, we propose a way to adapt continual learning (CL) techniques to GANs. Our method is orthogonal to existing GAN training techniques and can be added to existing GANs without any architectural modification. Experiments on synthetic and real-world datasets confirm that the proposed method alleviates catastrophic forgetting and improves the convergence of GANs.
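To make the idea concrete, below is a minimal sketch of one common way to adapt a CL technique to a GAN: an EWC-style (Elastic Weight Consolidation) quadratic penalty that anchors the discriminator's parameters to a recent snapshot, weighted by a squared-gradient importance estimate. The class name, the importance estimate, and the placement of the penalty are illustrative assumptions, not necessarily the paper's exact method.

```python
import torch
import torch.nn as nn

class EWCPenalty:
    """Sketch of an EWC-style regularizer for a GAN discriminator.

    Anchors parameters to a snapshot from an earlier point in training
    (an earlier "task", i.e., an earlier generated distribution) and
    penalizes movement away from it, weighted by per-parameter
    importance (here approximated by squared gradients).
    """

    def __init__(self, model: nn.Module):
        self.model = model
        self.anchors = {}      # parameter snapshot from the previous "task"
        self.importance = {}   # per-parameter importance weights

    def snapshot(self):
        """Store current parameters and squared gradients as the anchor.

        Call after a backward pass so p.grad holds fresh gradients.
        """
        for name, p in self.model.named_parameters():
            self.anchors[name] = p.detach().clone()
            grad = p.grad.detach() if p.grad is not None else torch.zeros_like(p)
            self.importance[name] = grad.pow(2)

    def penalty(self) -> torch.Tensor:
        """Compute sum_i F_i * (theta_i - theta*_i)^2 over all parameters."""
        loss = torch.zeros((), device=next(self.model.parameters()).device)
        for name, p in self.model.named_parameters():
            if name in self.anchors:
                diff = p - self.anchors[name]
                loss = loss + (self.importance[name] * diff.pow(2)).sum()
        return loss

# Hypothetical usage inside the discriminator update, with `lam` an
# assumed regularization weight:
#   d_loss = bce(D(real), ones) + bce(D(fake), zeros) + lam * ewc.penalty()
```

Because the penalty is just an extra additive term in the discriminator loss, it requires no architectural change and composes with existing GAN training techniques, which is consistent with the orthogonality claim in the abstract.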