(f,Γ)-Divergences: Interpolating between f-Divergences and Integral Probability Metrics

11 November 2020
Jeremiah Birrell
P. Dupuis
Markos A. Katsoulakis
Yannis Pantazis
Luc Rey-Bellet
Abstract

We develop a rigorous and general framework for constructing information-theoretic divergences that subsume both f-divergences and integral probability metrics (IPMs), such as the 1-Wasserstein distance. We prove under which assumptions these divergences, hereafter referred to as (f,Γ)-divergences, provide a notion of `distance' between probability measures and show that they can be expressed as a two-stage mass-redistribution/mass-transport process. The (f,Γ)-divergences inherit features from IPMs, such as the ability to compare distributions which are not absolutely continuous, as well as from f-divergences, namely the strict concavity of their variational representations and the ability to control heavy-tailed distributions for particular choices of f. When combined, these features establish a divergence with improved properties for estimation, statistical learning, and uncertainty quantification applications. Using statistical learning as an example, we demonstrate their advantage in training generative adversarial networks (GANs) for heavy-tailed, not-absolutely continuous sample distributions, and we also show improved performance and stability over gradient-penalized Wasserstein GAN in image generation.
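As a point of reference for the "variational representations" mentioned in the abstract, the following is a minimal sketch reconstructed from the standard f-divergence/IPM dual formulas, not quoted from this page; the precise function class Γ, the domain of f, and the regularity assumptions are spelled out in the paper itself.

  % Sketch of the variational form of the (f,Γ)-divergence.
  % Γ is a class of test functions g; f^* denotes the Legendre transform of f.
  D_f^\Gamma(Q \,\|\, P) \;=\; \sup_{g \in \Gamma}
    \Big\{ \mathbb{E}_Q[g] \;-\; \inf_{\nu \in \mathbb{R}}
      \big\{ \nu + \mathbb{E}_P\!\left[ f^*(g - \nu) \right] \big\} \Big\}
  % Taking Γ to be all bounded measurable functions recovers the f-divergence D_f(Q || P);
  % choosing f so that f^*(y) = y collapses the inner infimum to E_P[g], recovering the IPM
  %   sup_{g in Γ} { E_Q[g] - E_P[g] },
  % e.g. the 1-Wasserstein distance when Γ is the set of 1-Lipschitz functions.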
