End-To-End Learning of Gaussian Mixture Priors for Diffusion Sampler

1 March 2025
Denis Blessing
Xiaogang Jia
Gerhard Neumann
Abstract

Diffusion models optimized via variational inference (VI) have emerged as a promising tool for generating samples from unnormalized target densities. These models create samples by simulating a stochastic differential equation, starting from a simple, tractable prior, typically a Gaussian distribution. However, when the support of this prior differs greatly from that of the target distribution, diffusion models often struggle to explore effectively or suffer from large discretization errors. Moreover, learning the prior distribution can lead to mode collapse, which is exacerbated by the mode-seeking nature of the reverse Kullback-Leibler divergence commonly used in VI. To address these challenges, we propose end-to-end learnable Gaussian mixture priors (GMPs). GMPs offer improved control over exploration, adaptability to the target's support, and increased expressiveness to counteract mode collapse. We further leverage the structure of mixture models by proposing a strategy for iteratively refining the model by adding mixture components during training. Our experimental results demonstrate significant performance improvements across a diverse range of real-world and synthetic benchmark problems when using GMPs, without requiring additional target evaluations.
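The core mechanism the abstract describes, drawing the SDE's initial state from a learnable mixture prior and evaluating its log-density inside the reverse-KL objective, can be sketched in a few lines. The following PyTorch snippet is a minimal, hypothetical illustration, not the authors' implementation: the class name GaussianMixturePrior, the diagonal-covariance parameterization, and the component count and dimensionality are all assumptions made for the example.

import torch
import torch.nn as nn

class GaussianMixturePrior(nn.Module):
    """Hypothetical learnable mixture prior p0(x) = sum_k w_k N(x; mu_k, diag(sigma_k^2))."""

    def __init__(self, n_components: int, dim: int):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_components))         # mixture weights (pre-softmax)
        self.means = nn.Parameter(torch.randn(n_components, dim))     # component means mu_k
        self.log_stds = nn.Parameter(torch.zeros(n_components, dim))  # per-dimension log-stds

    def sample(self, n: int) -> torch.Tensor:
        # Pick a component per sample, then reparameterize the chosen Gaussian.
        k = torch.distributions.Categorical(logits=self.logits).sample((n,))
        eps = torch.randn(n, self.means.shape[1])
        return self.means[k] + self.log_stds[k].exp() * eps

    def log_prob(self, x: torch.Tensor) -> torch.Tensor:
        # log p0(x) via logsumexp over components; this term enters the VI loss.
        log_w = torch.log_softmax(self.logits, dim=0)                  # (K,)
        comp = torch.distributions.Normal(self.means, self.log_stds.exp())
        log_px_given_k = comp.log_prob(x.unsqueeze(1)).sum(-1)         # (n, K)
        return torch.logsumexp(log_w + log_px_given_k, dim=-1)         # (n,)

prior = GaussianMixturePrior(n_components=8, dim=2)
x0 = prior.sample(128)       # initial states for the SDE simulation
logp0 = prior.log_prob(x0)   # log-density term for the reverse-KL objective

Note that the discrete component choice in sample is not reparameterized; in a sketch like this, gradients for the mixture weights would instead flow through the log_prob term of the variational objective.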

@article{blessing2025_2503.00524,
  title={End-To-End Learning of Gaussian Mixture Priors for Diffusion Sampler},
  author={Denis Blessing and Xiaogang Jia and Gerhard Neumann},
  journal={arXiv preprint arXiv:2503.00524},
  year={2025}
}