
USP: Unified Self-Supervised Pretraining for Image Generation and Understanding

Main: 8 pages, Appendix: 4 pages, Bibliography: 4 pages
8 figures, 15 tables
Abstract

Recent studies have highlighted the interplay between diffusion models and representation learning. Intermediate representations from diffusion models can be leveraged for downstream visual tasks, while self-supervised vision models can enhance the convergence and generation quality of diffusion models. However, transferring pretrained weights from vision models to diffusion models is challenging due to input mismatches and the use of latent spaces. To address these challenges, we propose Unified Self-supervised Pretraining (USP), a framework that initializes diffusion models via masked latent modeling in a Variational Autoencoder (VAE) latent space. USP achieves comparable performance on understanding tasks while significantly improving the convergence speed and generation quality of diffusion models. Our code will be publicly available at this https URL.
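To make the core idea concrete, the sketch below illustrates masked latent modeling in a VAE latent space in its most generic form: mask a random subset of latent tokens and score a prediction only on the masked positions. All names and shapes here are illustrative assumptions, not the paper's actual architecture; the trivial mean-of-visible-tokens "predictor" stands in for the pretrained backbone USP would train.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical VAE output: a 16x16 grid of 4-channel latent tokens,
# flattened to (256, 4). Shapes are assumptions for illustration only.
latents = rng.normal(size=(16 * 16, 4))

def random_mask(num_tokens: int, mask_ratio: float, rng) -> np.ndarray:
    """Boolean mask marking which latent tokens are hidden from the model."""
    num_masked = int(num_tokens * mask_ratio)
    idx = rng.permutation(num_tokens)[:num_masked]
    mask = np.zeros(num_tokens, dtype=bool)
    mask[idx] = True
    return mask

mask = random_mask(latents.shape[0], mask_ratio=0.75, rng=rng)

# Stand-in predictor: the mean of the visible tokens, broadcast to every
# masked position. A real model would predict each masked latent token.
prediction = np.tile(latents[~mask].mean(axis=0), (int(mask.sum()), 1))

# Masked-latent-modeling objective: reconstruction error on masked tokens only.
loss = float(np.mean((prediction - latents[mask]) ** 2))
print(mask.sum(), round(loss, 4))
```

The key difference from pixel-space masked image modeling is that both the masking and the reconstruction target live in the VAE latent space, which is what lets the pretrained weights match the input space of a latent diffusion model.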
