Generative Sliced MMD Flows with Riesz Kernels

19 May 2023
J. Hertrich, Christian Wald, Fabian Altekrüger, Paul Hagemann
Abstract

Maximum mean discrepancy (MMD) flows suffer from high computational costs in large-scale computations. In this paper, we show that MMD flows with Riesz kernels $K(x,y) = -\|x-y\|^r$, $r \in (0,2)$, have exceptional properties which allow their efficient computation. We prove that the MMD of Riesz kernels, which is also known as the energy distance, coincides with the MMD of their sliced version. As a consequence, the computation of gradients of MMDs can be performed in the one-dimensional setting. Here, for $r=1$, a simple sorting algorithm can be applied to reduce the complexity from $O(MN + N^2)$ to $O((M+N)\log(M+N))$ for two measures with $M$ and $N$ support points. As another interesting follow-up result, the MMD of compactly supported measures can be estimated from above and below by the Wasserstein-1 distance. For the implementation, we approximate the gradient of the sliced MMD by using only a finite number $P$ of slices. We show that the resulting error has complexity $O(\sqrt{d/P})$, where $d$ is the data dimension. These results enable us to train generative models by approximating MMD gradient flows by neural networks, even for image applications. We demonstrate the efficiency of our model by image generation on MNIST, FashionMNIST and CIFAR10.
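The two computational ingredients in the abstract can be illustrated concretely: slicing replaces the d-dimensional MMD by an average of one-dimensional MMDs over $P$ random directions, and for $r=1$ each one-dimensional term reduces, after sorting, to a weighted sum with the stated $O((M+N)\log(M+N))$ cost. The following NumPy sketch is illustrative only: the function names, the use of NumPy, and the plain Monte Carlo averaging are not taken from the paper, and the dimension-dependent constant relating sliced and full MMD is omitted. It estimates the sliced energy distance between two point clouds.

import numpy as np

def sum_abs_diffs(a):
    """Return sum_{i,j} |a_i - a_j| for a 1D array in O(n log n) via sorting.

    After sorting, the k-th smallest value (0-based) contributes with
    coefficient (2k + 1 - n), which replaces the quadratic double sum.
    """
    a = np.sort(a)
    n = a.size
    coeff = 2.0 * np.arange(n) + 1.0 - n
    return 2.0 * np.dot(coeff, a)

def energy_distance_1d(x, y):
    """Squared MMD with the kernel K(s, t) = -|s - t| (energy distance)
    between two 1D samples, computed in O((M + N) log(M + N))."""
    n, m = x.size, y.size
    s_xx = sum_abs_diffs(x)
    s_yy = sum_abs_diffs(y)
    # cross sum via the pooled sample: S_pool = S_xx + S_yy + 2 * S_xy
    s_xy = 0.5 * (sum_abs_diffs(np.concatenate([x, y])) - s_xx - s_yy)
    return 2.0 * s_xy / (n * m) - s_xx / n**2 - s_yy / m**2

def sliced_energy_distance(x, y, num_slices=64, rng=None):
    """Monte Carlo estimate of the sliced (squared) MMD: project both point
    clouds onto num_slices random directions on the unit sphere and average
    the 1D energy distances of the projections.

    x: (N, d) array, y: (M, d) array.
    """
    rng = np.random.default_rng(rng)
    d = x.shape[1]
    directions = rng.standard_normal((num_slices, d))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    proj_x = x @ directions.T   # shape (N, num_slices)
    proj_y = y @ directions.T   # shape (M, num_slices)
    return np.mean([energy_distance_1d(proj_x[:, p], proj_y[:, p])
                    for p in range(num_slices)])

# Example: two Gaussian point clouds, one shifted by 1 in every coordinate.
rng = np.random.default_rng(0)
samples = rng.standard_normal((500, 2))
shifted = rng.standard_normal((400, 2)) + 1.0
print(sliced_energy_distance(samples, shifted, num_slices=128, rng=0))

The per-slice cost is dominated by sorting, so the total cost scales as $O(P(M+N)\log(M+N))$ rather than the $O(MN + N^2)$ of a naive pairwise kernel sum; increasing the number of slices reduces the Monte Carlo error at the $O(\sqrt{d/P})$ rate stated in the abstract.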
