
DiffBlender: Composable and Versatile Multimodal Text-to-Image Diffusion Models

Expert Systems with Applications (ESWA), 2023
Main: 16 pages · Bibliography: 3 pages · 20 figures · 7 tables
Abstract

In this study, we aim to enhance the capabilities of diffusion-based text-to-image (T2I) generation models by integrating diverse modalities beyond textual descriptions within a unified framework. To this end, we categorize widely used conditional inputs into three modality types: structure, layout, and attribute. We propose DiffBlender, a multimodal T2I diffusion model capable of processing all three modalities within a single architecture without modifying the parameters of the pre-trained diffusion model; only a small subset of components is updated. Our approach sets new benchmarks in multimodal generation through extensive quantitative and qualitative comparisons with existing conditional generation methods. We demonstrate that DiffBlender effectively integrates multiple sources of information and supports diverse applications in detailed image synthesis. The code and demo are available at this https URL.
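The abstract's training-efficiency claim — the pretrained diffusion backbone stays frozen while only small modality-specific components are updated — can be illustrated with a minimal sketch. All names below (`DiffusionBackbone`, `ModalityAdapter`, `trainable_params`) are hypothetical stand-ins for illustration, not the paper's actual architecture or API.

```python
# Sketch of the idea: freeze the pretrained backbone, train only small
# per-modality modules for the three condition types named in the abstract
# (structure, layout, attribute). Class and field names are assumptions.

class DiffusionBackbone:
    """Stands in for a pretrained T2I diffusion model (kept frozen)."""
    def __init__(self):
        self.params = {"unet.w": 1.0, "text_enc.w": 2.0}
        self.trainable = False  # never updated during conditioning training

class ModalityAdapter:
    """Small trainable module handling one condition modality."""
    def __init__(self, modality):
        self.modality = modality
        self.params = {f"{modality}.w": 0.0}
        self.trainable = True

def trainable_params(backbone, adapters):
    """Collect only the parameters that would receive gradient updates."""
    out = {}
    if backbone.trainable:
        out.update(backbone.params)
    for adapter in adapters:
        if adapter.trainable:
            out.update(adapter.params)
    return out

backbone = DiffusionBackbone()
adapters = [ModalityAdapter(m) for m in ("structure", "layout", "attribute")]
updated = trainable_params(backbone, adapters)
# Only the three adapter parameters are trainable; the backbone is untouched.
```

The point of the sketch is the parameter split: the full backbone weights never enter the update set, so conditioning on new modalities adds only a small trainable footprint.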
