Generalization error property of infoGAN for two-layer neural network

The Information Maximizing Generative Adversarial Network (InfoGAN) can be understood as a minimax problem between two neural networks, a discriminator and a generator, augmented with a mutual information term. InfoGAN incorporates several components, including latent codes, mutual information, and the adversarial objective function. This work studies the generalization error of InfoGAN as the sample sizes available to the discriminator and generator approach infinity. To establish this property, the analysis bounds the difference between the empirical and population versions of the objective function; the error bound is derived from the Rademacher complexities of the discriminator and generator function classes. The bound is then proven for a two-layer network in which both the discriminator and generator use Lipschitz, non-decreasing activation functions.
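For context, the standard InfoGAN objective of Chen et al. (2016), which this analysis builds on, augments the usual GAN minimax value with a variational lower bound on the mutual information between a latent code $c$ and the generated sample; the symbols $\lambda$ and $Q$ below follow that standard formulation and are not necessarily this paper's notation:

$$
\min_{G,Q}\,\max_{D}\; V_{\mathrm{GAN}}(D,G)\;-\;\lambda\, L_I(G,Q),
$$

where $V_{\mathrm{GAN}}(D,G)=\mathbb{E}_{x\sim p_{\mathrm{data}}}[\log D(x)]+\mathbb{E}_{z,c}\!\left[\log\bigl(1-D(G(z,c))\bigr)\right]$ and $L_I(G,Q)=\mathbb{E}_{c\sim p(c),\,x\sim G(z,c)}[\log Q(c\mid x)]+H(c)$ is a lower bound on the mutual information $I\bigl(c;G(z,c)\bigr)$. A generalization bound of the kind described, derived via Rademacher complexity, typically takes the uniform-convergence form (a schematic sketch, not the paper's exact statement): with probability at least $1-\delta$ over $n$ samples,

$$
\sup_{f\in\mathcal{F}}\;\bigl|\widehat{V}_n(f)-V(f)\bigr|\;\le\;2\,\mathfrak{R}_n(\mathcal{F})+C\sqrt{\frac{\log(1/\delta)}{n}},
$$

with $\widehat{V}_n$ and $V$ the empirical and population objectives, $\mathfrak{R}_n(\mathcal{F})$ the Rademacher complexity of the relevant function class, and $C$ a constant depending on the range of the loss. The Lipschitz, non-decreasing activation assumption is what lets the complexity of the two-layer classes be controlled via the contraction lemma.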
@article{hasan2025_2310.00443,
  title={Generalization error property of infoGAN for two-layer neural network},
  author={Mahmud Hasan and Mathias Muia},
  journal={arXiv preprint arXiv:2310.00443},
  year={2025}
}