$(\alpha_D,\alpha_G)$-GANs: Addressing GAN Training Instabilities via Dual Objectives

28 February 2023
Monica Welfert
Kyle Otstot
Gowtham R. Kurri
Lalitha Sankar
Abstract

In an effort to address the training instabilities of GANs, we introduce a class of dual-objective GANs with different value functions (objectives) for the generator (G) and discriminator (D). In particular, we model each objective using $\alpha$-loss, a tunable classification loss, to obtain $(\alpha_D,\alpha_G)$-GANs, parameterized by $(\alpha_D,\alpha_G)\in (0,\infty]^2$. For a sufficiently large number of samples and sufficient capacities for G and D, we show that the resulting non-zero-sum game simplifies to minimizing an $f$-divergence under appropriate conditions on $(\alpha_D,\alpha_G)$. In the finite sample and capacity setting, we define estimation error to quantify the gap in the generator's performance relative to the optimal setting with infinite samples and obtain upper bounds on this error, showing it to be order optimal under certain conditions. Finally, we highlight the value of tuning $(\alpha_D,\alpha_G)$ in alleviating training instabilities for the synthetic 2D Gaussian mixture ring and the Stacked MNIST datasets.
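To make the tunable objective concrete, below is a minimal PyTorch sketch of the $\alpha$-loss and the two value functions it induces. The $\alpha$-loss expression follows the authors' earlier work on tunable classification losses; the abstract does not spell out the exact value functions, so `discriminator_objective` and `generator_objective` here are an illustrative reconstruction under that assumption, not the paper's verbatim definitions.

```python
# A minimal sketch of the tunable alpha-loss and the resulting dual
# (alpha_D, alpha_G) objectives. Assumes the alpha-loss form from the
# authors' prior work on tunable classification losses; the paper's
# exact value functions may differ in detail.
import torch

def alpha_loss(p_true: torch.Tensor, alpha: float) -> torch.Tensor:
    """alpha-loss of the probability p_true assigned to the correct label.

    Recovers log-loss as alpha -> 1 and (1 - p_true) as alpha -> inf.
    """
    eps = 1e-7                       # numerical floor to keep pow/log stable
    p = p_true.clamp(eps, 1.0)
    if alpha == 1.0:                 # log-loss limit
        return -torch.log(p)
    if alpha == float("inf"):        # soft 0-1 loss limit
        return 1.0 - p
    a = alpha / (alpha - 1.0)
    return a * (1.0 - p.pow((alpha - 1.0) / alpha))

def discriminator_objective(d_real, d_fake, alpha_d):
    # D maximizes this quantity: real samples carry label 1 (probability
    # d_real of the true label), generated samples label 0 (probability
    # 1 - d_fake of the true label).
    return -(alpha_loss(d_real, alpha_d) + alpha_loss(1.0 - d_fake, alpha_d)).mean()

def generator_objective(d_fake, alpha_g):
    # G minimizes its own alpha_G-parameterized value function; only the
    # generated-sample term depends on G. With alpha_g != alpha_d the
    # game is non-zero-sum, as described in the abstract.
    return -alpha_loss(1.0 - d_fake, alpha_g).mean()
```

Setting `alpha_d = alpha_g = 1` recovers the log-loss objectives of the original (saturating) GAN, while letting the two parameters differ yields the non-zero-sum game the abstract describes.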
