Generative Uncertainty in Diffusion Models

Diffusion models have recently driven significant breakthroughs in generative modeling. While state-of-the-art models produce high-quality samples on average, individual samples can still be low quality. Detecting such samples without human inspection remains a challenging task. To address this, we propose a Bayesian framework for estimating generative uncertainty of synthetic samples. We outline how to make Bayesian inference practical for large, modern generative models and introduce a new semantic likelihood (evaluated in the latent space of a feature extractor) to address the challenges posed by high-dimensional sample spaces. Through our experiments, we demonstrate that the proposed generative uncertainty effectively identifies poor-quality samples and significantly outperforms existing uncertainty-based methods. Notably, our Bayesian framework can be applied post-hoc to any pretrained diffusion or flow matching model (via the Laplace approximation), and we propose simple yet effective techniques to minimize its computational overhead during sampling.
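The core idea can be illustrated with a toy sketch: approximate the posterior over generator weights with a Gaussian around the trained parameters (as a Laplace approximation would provide), draw several weight samples, and score each generated sample by the disagreement of its generations in a semantic feature space. All function names (`generate`, `feature_map`, `generative_uncertainty`) and the linear toy generator below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate(theta, z):
    """Toy stand-in for a diffusion/flow sampler: map latent noise z
    to a sample using parameters theta."""
    return np.tanh(z @ theta)

def feature_map(x):
    """Toy stand-in for a pretrained feature extractor defining the
    'semantic' space in which likelihoods/variances are evaluated."""
    return np.concatenate([x, x ** 2], axis=-1)

def generative_uncertainty(theta_map, posterior_std, z, n_draws=32):
    """Score each fixed latent z by the variance of its semantic
    features across posterior weight draws.

    The weight posterior is modeled as a diagonal Gaussian
    N(theta_map, posterior_std**2), mimicking a post-hoc Laplace
    approximation around the pretrained (MAP) weights.
    """
    feats = []
    for _ in range(n_draws):
        theta_k = theta_map + posterior_std * rng.standard_normal(theta_map.shape)
        feats.append(feature_map(generate(theta_k, z)))
    feats = np.stack(feats)                  # (n_draws, n_samples, feat_dim)
    return feats.var(axis=0).mean(axis=-1)   # one uncertainty score per sample

# Toy setup: 8-dim latents, 4-dim samples, 16 generated samples to score.
d, out = 8, 4
theta_map = rng.standard_normal((d, out))
posterior_std = 0.05 * np.ones_like(theta_map)

z = rng.standard_normal((16, d))
scores = generative_uncertainty(theta_map, posterior_std, z)
print(scores.shape)  # (16,)
```

Samples with the highest scores are the ones the posterior "disagrees" on most, and under the paper's hypothesis would be the candidates to flag as low quality; in practice the toy pieces above would be replaced by the actual diffusion sampler, a Laplace posterior over its weights, and a real feature extractor.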
@article{jazbec2025_2502.20946,
  title={Generative Uncertainty in Diffusion Models},
  author={Metod Jazbec and Eliot Wong-Toi and Guoxuan Xia and Dan Zhang and Eric Nalisnick and Stephan Mandt},
  journal={arXiv preprint arXiv:2502.20946},
  year={2025}
}