
Learning Mixtures of Gaussians Using Diffusion Models

Abstract

We give a new algorithm for learning mixtures of $k$ Gaussians (with identity covariance in $\mathbb{R}^n$) to TV error $\varepsilon$, with quasi-polynomial ($O\left(n^{\mathrm{poly}\log\left(\frac{n+k}{\varepsilon}\right)}\right)$) time and sample complexity, under a minimum weight assumption. Our results extend to continuous mixtures of Gaussians where the mixing distribution is supported on a union of $k$ balls of constant radius. In particular, this applies to the case of Gaussian convolutions of distributions on low-dimensional manifolds, or more generally sets with small covering number, for which no sub-exponential algorithm was previously known. Unlike previous approaches, most of which are algebraic in nature, our approach is analytic and relies on the framework of diffusion models. Diffusion models are a modern paradigm for generative modeling, which typically rely on learning the score function (the gradient of the log-pdf) along a process transforming a pure noise distribution, in our case a Gaussian, to the data distribution. Despite their dazzling performance in tasks such as image generation, there are few end-to-end theoretical guarantees that they can efficiently learn nontrivial families of distributions; we give some of the first such guarantees. We proceed by deriving higher-order Gaussian noise sensitivity bounds for the score functions of a Gaussian mixture, showing that they can be inductively learned using piecewise polynomial regression (up to poly-logarithmic degree), and combine this with known convergence results for diffusion models.
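As a minimal illustration of the diffusion-model setup the abstract describes (not the paper's algorithm), the sketch below computes the exact score of an identity-covariance Gaussian mixture along the Ornstein-Uhlenbeck noising process and draws an approximate sample by discretizing the reverse SDE. The paper's contribution is to show that this score can instead be learned from samples via piecewise polynomial regression; the function names, horizon, and step sizes here are illustrative assumptions.

```python
import numpy as np

def mixture_score(x, means, weights, t):
    """Score grad log p_t(x) of the noised mixture at OU time t.

    Under the forward process x_t = e^{-t} x_0 + sqrt(1 - e^{-2t}) z with
    identity-covariance components, each component stays Gaussian with mean
    e^{-t} mu_i and covariance I, so the score is a posterior-weighted
    average of (e^{-t} mu_i - x).
    """
    shrunk = np.exp(-t) * means                          # (k, n) noised component means
    logits = np.log(weights) - 0.5 * np.sum((x - shrunk) ** 2, axis=1)
    post = np.exp(logits - logits.max())
    post /= post.sum()                                   # posterior component weights
    return post @ shrunk - x                             # grad log p_t(x)

def reverse_sde_sample(means, weights, T=5.0, steps=500, rng=None):
    """Draw one approximate sample by discretizing the time-reversed SDE
    dx = (x + 2 * score) ds + sqrt(2) dW, started from pure Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    n = means.shape[1]
    x = rng.standard_normal(n)                           # p_T is approximately N(0, I)
    dt = T / steps
    for i in range(steps):
        t = T - i * dt
        drift = x + 2.0 * mixture_score(x, means, weights, t)
        x = x + drift * dt + np.sqrt(2.0 * dt) * rng.standard_normal(n)
    return x

# Toy usage: a 3-component mixture in R^2 with well-separated means.
means = np.array([[4.0, 0.0], [-4.0, 0.0], [0.0, 4.0]])
weights = np.array([0.5, 0.3, 0.2])
print(reverse_sde_sample(means, weights))
```

In the learning setting the closed-form `mixture_score` above is unavailable; the paper's analysis bounds how this score varies under Gaussian noise so that it can be approximated, piece by piece, with polynomials of poly-logarithmic degree fit by regression on samples.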

@article{gatmiry2025_2404.18869,
  title={Learning Mixtures of Gaussians Using Diffusion Models},
  author={Khashayar Gatmiry and Jonathan Kelner and Holden Lee},
  journal={arXiv preprint arXiv:2404.18869},
  year={2025}
}