Multi-Subspace Multi-Modal Modeling for Diffusion Models: Estimation, Convergence and Mixture of Experts

Ruofeng Yang
Yongcan Li
Bo Jiang
Cheng Chen
Shuai Li
Main: 10 pages · 5 figures · Bibliography: 3 pages · Appendix: 24 pages
Abstract

Recently, diffusion models have achieved strong performance with a small dataset of size $n$ and a fast optimization process. However, the estimation error of diffusion models suffers from the curse of dimensionality, scaling as $n^{-1/D}$ with the data dimension $D$. Since images are usually supported on a union of low-dimensional manifolds, current works model the data as a union of linear subspaces with Gaussian latents and achieve a $1/\sqrt{n}$ bound. Though this modeling reflects the multi-manifold property, a Gaussian latent cannot capture the multi-modal property of the latent manifold. To bridge this gap, we propose the mixture-subspace low-rank mixture-of-Gaussians (MoLR-MoG) modeling, which models the target data as a union of $K$ linear subspaces, where each subspace admits a mixture-of-Gaussians latent ($n_k$ modes with dimension $d_k$). Under this modeling, the corresponding score function naturally has a mixture-of-experts (MoE) structure, captures the multi-modal information, and is nonlinear. We first conduct real-world experiments to show that the generation results of the MoE-latent MoG NN are much better than those of the MoE-latent Gaussian score. Furthermore, the MoE-latent MoG NN achieves performance comparable to an MoE-latent Unet with $10\times$ as many parameters. These results indicate that the MoLR-MoG modeling is reasonable and suitable for real-world data. Based on this MoE-latent MoG score, we then provide an $R^4\sqrt{\sum_{k=1}^K n_k}\sqrt{\sum_{k=1}^K n_k d_k}/\sqrt{n}$ estimation error bound, which escapes the curse of dimensionality by exploiting the data structure. Finally, we study the optimization process and prove a convergence guarantee under the MoLR-MoG modeling. Combining these results, under a setting close to real-world data, this work explains why diffusion models require only a small training sample and enjoy a fast optimization process while achieving strong performance.
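As a rough illustration of the data model described in the abstract, the sketch below samples from a union of $K$ linear subspaces, each carrying a mixture-of-Gaussians latent. All dimensions, mode counts, and the noise scale are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

D = 16               # ambient data dimension
K = 3                # number of linear subspaces
d = [2, 3, 4]        # latent dimension d_k of each subspace
n_modes = [2, 2, 3]  # number of Gaussian modes n_k per subspace

# Orthonormal basis U_k in R^{D x d_k} for each subspace (via QR).
bases = [np.linalg.qr(rng.standard_normal((D, dk)))[0] for dk in d]
# Mode means mu_{k,j} in R^{d_k} for each mixture-of-Gaussians latent.
means = [rng.standard_normal((m, dk)) for m, dk in zip(n_modes, d)]

def sample(n):
    """Draw n points: pick a subspace k, pick a Gaussian mode j within it,
    draw a latent z in R^{d_k}, then embed it into R^D via U_k."""
    out = np.empty((n, D))
    for i in range(n):
        k = rng.integers(K)                                 # subspace index
        j = rng.integers(n_modes[k])                        # mode index
        z = means[k][j] + 0.1 * rng.standard_normal(d[k])   # MoG latent
        out[i] = bases[k] @ z                               # embed into R^D
    return out

X = sample(500)
print(X.shape)  # (500, 16)
```

Each sample lies exactly on one of the $K$ subspaces, while the mixture means give each subspace a multi-modal latent distribution, which is the structure a single-Gaussian latent cannot express.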
