Improved Convergence Rate for Diffusion Probabilistic Models

17 October 2024
Gen Li
Yuchen Jiao
Abstract

Score-based diffusion models have achieved remarkable empirical performance in machine learning and artificial intelligence thanks to their ability to generate high-quality new data instances from complex distributions. Improving our theoretical understanding of diffusion models, in particular their convergence analysis, has attracted considerable interest. Despite numerous theoretical attempts, a significant gap remains between theory and practice. To close this gap, we establish an iteration complexity of order $d^{1/3}\varepsilon^{-2/3}$, which improves on $d^{5/12}\varepsilon^{-1}$, the best complexity known prior to our work. This convergence analysis is based on a randomized midpoint method, first proposed for log-concave sampling (Shen and Lee, 2019) and later extended to diffusion models by Gupta et al. (2024). Our theory accommodates $\varepsilon$-accurate score estimates and does not require log-concavity of the target distribution. Moreover, the algorithm can also be parallelized to run in only $O(\log^2(d/\varepsilon))$ parallel rounds, in a similar way to prior works.
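To make the randomized midpoint idea concrete, here is a minimal sketch of one sampler step applied to a generic reverse-time diffusion SDE with a learned score. The drift form (an OU-type forward process), the `score(x, t)` interface, and the function name are illustrative assumptions; this is not the exact algorithm analyzed in the paper or in Gupta et al. (2024).

```python
import numpy as np

def randomized_midpoint_step(x, score, t, h, rng):
    """One randomized-midpoint step for a reverse-time SDE of the
    (assumed) form dX = [X + 2*score(X, t)] dt + sqrt(2) dW.

    score(x, t): estimated score function; h: step size.
    """
    # Draw the random midpoint fraction alpha ~ Uniform(0, 1).
    # Randomizing the evaluation point is what removes the
    # deterministic-midpoint discretization bias in expectation.
    alpha = rng.uniform()

    # Euler predictor to the random midpoint time t + alpha*h,
    # using the Brownian increment over an interval of length alpha*h.
    drift = x + 2.0 * score(x, t)
    w1 = np.sqrt(alpha * h) * rng.standard_normal(x.shape)
    x_mid = x + alpha * h * drift + np.sqrt(2.0) * w1

    # Full step of size h, with the drift evaluated at the random
    # midpoint; w1 + w2 is the Brownian increment over the whole
    # interval, coupled with the increment used for the midpoint.
    drift_mid = x_mid + 2.0 * score(x_mid, t + alpha * h)
    w2 = np.sqrt((1.0 - alpha) * h) * rng.standard_normal(x.shape)
    return x + h * drift_mid + np.sqrt(2.0) * (w1 + w2)
```

A sampler would iterate this step over a grid of reverse times, plugging in the trained score network for `score`; the parallel variant mentioned in the abstract instead resolves many such midpoint evaluations per round via fixed-point iteration.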
