
Resolving Memorization in Empirical Diffusion Model for Manifold Data in High-Dimensional Spaces

Main: 32 pages, 2 figures; bibliography: 4 pages

Abstract

Diffusion models are popular tools for generating new data samples, using a forward process that adds noise to data and a reverse process that denoises to produce samples. However, when the data distribution consists of $n$ points, empirical diffusion models tend to reproduce existing data points, a phenomenon known as the memorization effect. Current literature often addresses this with complex machine learning techniques. This work shows that the memorization issue can be resolved simply by applying an inertia update at the end of the empirical diffusion simulation. Our inertial diffusion model requires only the empirical score function and no additional training. We demonstrate that the distribution of samples from this model approximates the true data distribution on a $C^2$ manifold of dimension $d$, within a Wasserstein-1 distance of order $O(n^{-\frac{2}{d+4}})$. This bound is significantly smaller than the Wasserstein distance between the population and empirical distributions, confirming that the inertial diffusion model produces new and diverse samples. Remarkably, since no further training is needed, this estimate is independent of the ambient space dimension. Our analysis shows that the inertial diffusion samples resemble Gaussian kernel density estimations on the manifold, revealing a novel connection between diffusion models and manifold learning.
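To make the idea concrete, the following is a minimal sketch of the procedure the abstract describes: simulate a reverse diffusion driven by the empirical score of the Gaussian-smoothed point cloud, then apply a final inertia step that pushes the sample off the training point it would otherwise collapse onto. The noise schedule, the probability-flow (deterministic) discretization, and the inertia length `tau` are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def empirical_score(x, data, sigma):
    # Score of the Gaussian-smoothed empirical distribution:
    # p_sigma(x) = (1/n) sum_i N(x; x_i, sigma^2 I),
    # grad log p_sigma(x) = sum_i w_i(x) (x_i - x) / sigma^2,
    # where w_i is the softmax posterior over data points.
    diffs = data - x                                  # (n, D)
    logits = -np.sum(diffs**2, axis=1) / (2 * sigma**2)
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return (w[:, None] * diffs).sum(axis=0) / sigma**2

def sample_inertial(data, n_steps=200, sigma_max=2.0, sigma_min=1e-2,
                    tau=0.05, rng=None):
    # Reverse-time simulation of a variance-exploding empirical diffusion
    # (deterministic probability-flow discretization), followed by an
    # "inertia" update of length tau along the final direction of motion.
    # tau plays the role of a kernel bandwidth: it keeps the sample from
    # landing exactly on a training point (an illustrative choice).
    rng = np.random.default_rng(rng)
    D = data.shape[1]
    sigmas = np.geomspace(sigma_max, sigma_min, n_steps)
    x = rng.normal(scale=sigma_max, size=D)
    step = np.zeros(D)
    for i in range(n_steps - 1):
        s, s_next = sigmas[i], sigmas[i + 1]
        step = 0.5 * (s**2 - s_next**2) * empirical_score(x, data, s)
        x = x + step
    # Inertia update: continue moving along the last drift direction
    # instead of stopping on (a point arbitrarily close to) a data point.
    push = step / max(np.linalg.norm(step), 1e-12)
    return x + tau * push
```

Without the final `push`, the deterministic reverse flow contracts onto the nearest training point as the noise level vanishes, which is exactly the memorization effect; the inertia step converts that collapse into a sample spread at scale `tau` around the point cloud.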
