
Faster Diffusion Models via Higher-Order Approximation

Main: 38 pages · Bibliography: 4 pages
Abstract

In this paper, we explore provable acceleration of diffusion models without any additional retraining. Focusing on the task of approximating a target data distribution in $\mathbb{R}^d$ to within $\varepsilon$ total-variation distance, we propose a principled, training-free sampling algorithm that requires only on the order of $d^{1+2/K} \varepsilon^{-1/K}$ score function evaluations (up to log factors) in the presence of accurate scores, where $K>0$ is an arbitrary fixed integer. This result applies to a broad class of target data distributions, without the need for assumptions such as smoothness or log-concavity. Our theory is robust vis-à-vis inexact score estimation, degrading gracefully as the score estimation error increases -- without demanding higher-order smoothness on the score estimates as assumed in previous work. The proposed algorithm draws insight from high-order ODE solvers, leveraging high-order Lagrange interpolation and successive refinement to approximate the integral derived from the probability flow ODE. More broadly, our work develops a theoretical framework towards understanding the efficacy of high-order methods for accelerated sampling.
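To illustrate the core idea, here is a minimal NumPy sketch of how a high-order step can be built from Lagrange interpolation: the unknown integrand (in the paper, a score-dependent term along the probability flow ODE) is evaluated at $K$ nodes, replaced by its degree-$(K-1)$ Lagrange interpolant, and that polynomial is integrated over the step. The function names, the quadrature used to integrate the basis polynomials, and the update form are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def lagrange_weights(nodes, a, b, n_quad=2001):
    """Integrate each Lagrange basis polynomial over [a, b].

    nodes: the K interpolation points; the returned weights w satisfy
    sum_i w[i] * f(nodes[i]) = integral_a^b of the degree-(K-1)
    interpolant of f. (Basis integrals are computed here by a dense
    trapezoidal rule for simplicity.)
    """
    ts = np.linspace(a, b, n_quad)
    weights = []
    for i, ti in enumerate(nodes):
        ell = np.ones_like(ts)  # i-th Lagrange basis polynomial on the grid
        for j, tj in enumerate(nodes):
            if j != i:
                ell *= (ts - tj) / (ti - tj)
        weights.append(np.trapz(ell, ts))
    return np.array(weights)

def interpolated_step(x, score_vals, nodes, a, b):
    """One hypothetical high-order update: x plus the integral of the
    Lagrange interpolant through K previous score evaluations."""
    w = lagrange_weights(nodes, a, b)
    return x + sum(wi * si for wi, si in zip(w, score_vals))
```

Because a degree-$(K-1)$ interpolant reproduces polynomials of degree up to $K-1$ exactly, the quadrature error per step decays rapidly in $K$, which is the mechanism behind the $\varepsilon^{-1/K}$ dependence in the iteration count.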
