ResearchTrend.AI
KALE Flow: A Relaxed KL Gradient Flow for Probabilities with Disjoint Support

16 June 2021
Pierre Glaser
Michael Arbel
A. Gretton

Papers citing "KALE Flow: A Relaxed KL Gradient Flow for Probabilities with Disjoint Support"

5 papers shown:
Ultra-fast feature learning for the training of two-layer neural networks in the two-timescale regime
Raphael Barboni, Gabriel Peyré, François-Xavier Vialard
25 Apr 2025

DDEQs: Distributional Deep Equilibrium Models through Wasserstein Gradient Flows
Jonathan Geuter, Clément Bonet, Anna Korba, David Alvarez-Melis
03 Mar 2025

Deep MMD Gradient Flow without adversarial training
Alexandre Galashov, Valentin De Bortoli, Arthur Gretton
10 May 2024

Wasserstein Gradient Flows for Moreau Envelopes of f-Divergences in Reproducing Kernel Hilbert Spaces
Viktor Stein, Sebastian Neumayer, Gabriele Steidl, Nicolaj Rux
07 Feb 2024

Efficient Gradient Flows in Sliced-Wasserstein Space
Clément Bonet, Nicolas Courty, François Septier, Lucas Drumetz
21 Oct 2021