
Learning with Differentially Private (Sliced) Wasserstein Gradients

Abstract

In this work, we introduce a novel framework for privately optimizing objectives that rely on Wasserstein distances between data-dependent empirical measures. Our main theoretical contribution is a control on the sensitivity of the Wasserstein gradient to individual data points, derived from an explicit formulation of this gradient in a fully discrete setting; this control enables strong privacy guarantees at minimal utility cost. Building on these insights, we develop a deep learning approach that incorporates gradient and activation clipping, techniques originally designed for the differentially private (DP) training of objectives with a finite-sum structure. We further demonstrate that standard privacy accounting methods extend to Wasserstein-based objectives, facilitating large-scale private training. Empirical results confirm that our framework effectively balances accuracy and privacy, offering a theoretically sound solution for privacy-preserving machine learning tasks that rely on optimal transport distances such as the Wasserstein and sliced-Wasserstein distances.
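
As an illustrative sketch only (not the authors' implementation), the following Python/NumPy code estimates the gradient of the squared sliced-Wasserstein distance between two uniform empirical measures via random projections and 1-D sorting, then privatizes one descent step by clipping the gradient's norm and adding Gaussian noise, in the spirit of the clipping-plus-noise recipe the abstract describes. The function names sliced_w2_grad and dp_gradient_step and the parameters clip_norm and sigma are assumptions for illustration; calibrating sigma to an (epsilon, delta) budget requires a privacy accountant, which this sketch omits.

import numpy as np

def sliced_w2_grad(x, y, n_proj=64, rng=None):
    """Monte Carlo gradient of SW_2^2(mu_x, mu_y) w.r.t. the support x.
    x, y: (n, d) arrays; both measures are uniform over their n points.
    (Illustrative sketch; not the paper's exact formulation.)"""
    rng = np.random.default_rng(rng)
    n, d = x.shape
    grad = np.zeros_like(x)
    for _ in range(n_proj):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)            # random direction on the sphere
        px, py = x @ theta, y @ theta             # 1-D projections
        ix, iy = np.argsort(px), np.argsort(py)   # sorting gives the optimal 1-D plan
        diff = np.zeros(n)
        diff[ix] = px[ix] - py[iy]                # match projections in sorted order
        grad += (2.0 / n) * np.outer(diff, theta)
    return grad / n_proj

def dp_gradient_step(x, y, lr=0.1, clip_norm=1.0, sigma=1.0, rng=None):
    """One private step: clip the gradient's global norm to clip_norm,
    then add Gaussian noise scaled to that bound (Gaussian mechanism).
    clip_norm and sigma are assumed hyperparameters, not values from the paper."""
    rng = np.random.default_rng(rng)
    g = sliced_w2_grad(x, y, rng=rng)
    g *= min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
    g += sigma * clip_norm * rng.standard_normal(g.shape)
    return x - lr * g

# Toy usage: pull a random point cloud toward a "private" target cloud.
rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, size=(256, 2))            # private data
x = rng.normal(size=(256, 2))
for _ in range(100):
    x = dp_gradient_step(x, y, rng=rng)

Sorting is what makes the 1-D transport plan explicit here, which is why the sliced variant is a convenient setting for bounding how much any single data point can move the gradient.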

@article{lalanne2025_2502.01701,
  title={Learning with Differentially Private (Sliced) Wasserstein Gradients},
  author={Clément Lalanne and Jean-Michel Loubes and David Rodríguez-Vítores},
  journal={arXiv preprint arXiv:2502.01701},
  year={2025}
}