Smoothed Distance Kernels for MMDs and Applications in Wasserstein Gradient Flows

10 April 2025
Nicolaj Rux, Michael Quellmalz, Gabriele Steidl
Abstract

Negative distance kernels $K(x,y) := -\|x-y\|$ were used in the definition of maximum mean discrepancies (MMDs) in statistics and lead to favorable numerical results in various applications. In particular, so-called slicing techniques for handling high-dimensional kernel summations profit from the simple, parameter-free structure of the distance kernel. However, due to its non-smoothness at $x=y$, most of the classical theoretical results, e.g., on Wasserstein gradient flows of the corresponding MMD functional, no longer hold. In this paper, we propose a new kernel that retains the favorable properties of the negative distance kernel, namely conditional positive definiteness of order one, nearly linear growth towards infinity, and a simple slicing structure, while additionally being Lipschitz differentiable. Our construction is based on a simple 1D smoothing procedure for the absolute value function followed by a Riemann-Liouville fractional integral transform. Numerical results demonstrate that the new kernel performs comparably to the negative distance kernel in gradient descent methods, but now with theoretical guarantees.
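To make the role of smoothness concrete, the sketch below shows one explicit Euler step of a discretized MMD particle flow. It uses the simple $C^1$ surrogate $K(x,y) = -\sqrt{\|x-y\|^2 + \varepsilon^2}$ as a stand-in for a smoothed distance kernel; this is NOT the paper's Riemann-Liouville construction, only an illustration of why a Lipschitz gradient matters in such flows. All names (grad_smoothed_kernel, mmd_flow_step) and the parameters eps and tau are hypothetical choices for this sketch.

import numpy as np

def grad_smoothed_kernel(x, y, eps=0.1):
    # Gradient in x of the surrogate kernel K(x, y) = -sqrt(||x - y||^2 + eps^2).
    # NOTE: illustrative stand-in, not the paper's Riemann-Liouville kernel.
    # Unlike -||x - y||, this gradient is Lipschitz and well defined at x = y.
    diff = x - y
    denom = np.sqrt(np.sum(diff**2, axis=-1, keepdims=True) + eps**2)
    return -diff / denom

def mmd_flow_step(X, Y, tau=0.05, eps=0.1):
    # One explicit Euler step of the discretized Wasserstein gradient flow of
    # F(mu) = MMD^2(mu, nu): the particles X (samples of mu) move along the
    # negative Wasserstein gradient toward the fixed target sample Y.
    g_xx = grad_smoothed_kernel(X[:, None, :], X[None, :, :], eps)  # (n, n, d)
    g_xy = grad_smoothed_kernel(X[:, None, :], Y[None, :, :], eps)  # (n, m, d)
    velocity = 2.0 * (g_xx.mean(axis=1) - g_xy.mean(axis=1))
    return X - tau * velocity

# Example: flow 200 Gaussian particles toward a shifted Gaussian target.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
Y = rng.normal(loc=3.0, size=(200, 2))
for _ in range(500):
    X = mmd_flow_step(X, Y)

Because the surrogate's gradient is bounded by 1 and Lipschitz (with constant on the order of 1/eps), the explicit Euler step behaves stably for moderate tau even when particles nearly coincide; with the raw negative distance kernel the velocity field is discontinuous at coinciding particles, which is exactly the gap the paper's smoothed kernel is designed to close with theoretical guarantees.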

@article{rux2025_2504.07820,
  title={Smoothed Distance Kernels for MMDs and Applications in Wasserstein Gradient Flows},
  author={Nicolaj Rux and Michael Quellmalz and Gabriele Steidl},
  journal={arXiv preprint arXiv:2504.07820},
  year={2025}
}