Adaptive Destruction Processes for Diffusion Samplers

This paper explores the challenges and benefits of a trainable destruction process in diffusion samplers -- diffusion-based generative models trained to sample from an unnormalised density without access to data samples. In contrast to most work, which views diffusion samplers as approximations to an underlying continuous-time model, we view them as discrete-time policies trained to produce samples in very few generation steps. We propose to trade some of the elegance of the underlying theory for flexibility in the definition of the generative and destruction policies. In particular, we decouple the generation and destruction variances, enabling both transition kernels to be learned as unconstrained Gaussian densities. We show that, when the number of steps is limited, training both the generation and destruction processes results in faster convergence and improved sampling quality on various benchmarks. Through a thorough ablation study, we investigate the design choices necessary for stable training. Finally, we demonstrate the scalability of our approach through experiments on GAN latent space sampling for conditional image generation.
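
To make the central idea concrete: decoupling the variances means the generative kernel p_theta(x_{k+1} | x_k) = N(mu_theta(x_k, k), sigma^2_theta(x_k, k)) and the destruction kernel q_phi(x_k | x_{k+1}) = N(mu_phi(x_{k+1}, k), sigma^2_phi(x_{k+1}, k)) each predict their own variance rather than sharing a fixed noise schedule. The PyTorch sketch below is illustrative only; the network architecture, the fixed initial state, and the balance-style training signal are assumptions based on the abstract, not the paper's actual implementation.

import math
import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    """Time-conditioned Gaussian transition kernel with a learned,
    unconstrained mean and diagonal log-variance (hypothetical architecture)."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.GELU(),
            nn.Linear(hidden, 2 * dim),  # outputs mean and log-variance
        )

    def forward(self, x, t):
        mean, log_var = self.net(torch.cat([x, t], dim=-1)).chunk(2, dim=-1)
        return mean, log_var

def gaussian_log_prob(x, mean, log_var):
    # Log density of a diagonal Gaussian, summed over dimensions.
    return -0.5 * (log_var + (x - mean) ** 2 / log_var.exp()
                   + math.log(2 * math.pi)).sum(-1)

dim, n_steps, batch = 2, 8, 16        # few-step regime from the abstract
generation = GaussianPolicy(dim)      # generative kernel p(x_{k+1} | x_k)
destruction = GaussianPolicy(dim)     # destruction kernel q(x_k | x_{k+1})
# Decoupled variances: each network predicts its own log-variance
# instead of both being tied to one fixed noise schedule.

x = torch.zeros(batch, dim)           # fixed initial state (assumption)
log_pf = torch.zeros(batch)           # cumulative generation log-prob
log_pb = torch.zeros(batch)           # cumulative destruction log-prob
for k in range(n_steps):
    t = torch.full((batch, 1), k / n_steps)
    mean, log_var = generation(x, t)
    x_next = mean + (0.5 * log_var).exp() * torch.randn(batch, dim)
    log_pf = log_pf + gaussian_log_prob(x_next, mean, log_var)
    b_mean, b_log_var = destruction(x_next, t)
    log_pb = log_pb + gaussian_log_prob(x, b_mean, b_log_var)
    x = x_next
# log_pf - log_pb would enter a balance-style training objective against
# the log of the unnormalised target density (assumption, not specified here).

Because both kernels output an unconstrained log-variance, neither is bound to a prescribed schedule, which is what allows the destruction process itself to be trained alongside the generator.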
@article{gritsaev2025_2506.01541,
  title   = {Adaptive Destruction Processes for Diffusion Samplers},
  author  = {Timofei Gritsaev and Nikita Morozov and Kirill Tamogashev and Daniil Tiapkin and Sergey Samsonov and Alexey Naumov and Dmitry Vetrov and Nikolay Malkin},
  journal = {arXiv preprint arXiv:2506.01541},
  year    = {2025}
}