ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Combining Wasserstein-1 and Wasserstein-2 proximals: robust manifold learning via well-posed generative flows

16 July 2024 (arXiv:2407.11901)
Hyemin Gu, M. Katsoulakis, Luc Rey-Bellet, Benjamin J. Zhang

Papers citing "Combining Wasserstein-1 and Wasserstein-2 proximals: robust manifold learning via well-posed generative flows"

4 / 4 papers shown
1. OT-Transformer: A Continuous-time Transformer Architecture with Optimal Transport Regularization
   Kelvin Kan, Xingjian Li, Stanley Osher
   30 Jan 2025 · 89 / 2 / 0

2. Gradient flow in parameter space is equivalent to linear interpolation in output space
   Thomas Chen, Patrícia Muñoz Ewald
   02 Aug 2024 · 17 / 1 / 0

3. Wasserstein proximal operators describe score-based generative models and resolve memorization
   Benjamin J. Zhang, Siting Liu, Wuchen Li, M. Katsoulakis, Stanley J. Osher (DiffM)
   09 Feb 2024 · 30 / 8 / 0

4. Function-space regularized Rényi divergences
   Jeremiah Birrell, Yannis Pantazis, P. Dupuis, M. Katsoulakis, Luc Rey-Bellet
   10 Oct 2022 · 32 / 6 / 0