
DiffX: Guide Your Layout to Cross-Modal Generative Modeling

Main: 11 pages · 16 figures · Bibliography: 2 pages
Abstract

Diffusion models have made significant strides in language-driven and layout-driven image generation. However, most diffusion models are limited to generating visible RGB images. In fact, human perception of the world is enriched by diverse viewpoints, such as chromatic contrast, thermal illumination, and depth information. In this paper, we introduce DiffX, a novel diffusion model for general layout-guided cross-modal generation. Notably, DiffX presents a simple yet effective cross-modal generative modeling pipeline, which conducts the diffusion and denoising processes in a modality-shared latent space. Moreover, we introduce the Joint-Modality Embedder (JME) to enhance the interaction between layout and text conditions by incorporating a gated attention mechanism. Meanwhile, the advanced Long-CLIP model is employed to embed long captions from user instructions. To facilitate user-instructed generative training, we construct cross-modal image datasets with detailed text captions, assisted by a Large Multimodal Model (LMM). Through extensive experiments, DiffX demonstrates robustness in cross-modal generation across three ``RGB+X'' datasets (FLIR, MFNet, and COME15K) under various layout conditions. It also shows potential for the adaptive generation of ``RGB+X+Y+Z'' images, or images with even more modalities, on the COME15K and MCXFace datasets. Our code and constructed cross-modal image datasets are available at https://github.com/zeyuwang-zju/DiffX.
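The abstract mentions fusing layout and text conditions through a gated attention mechanism in the Joint-Modality Embedder (JME). The sketch below illustrates one plausible form of such gated cross-attention fusion in PyTorch; the class name, gate parameterization, and tensor shapes are assumptions for illustration only, not the authors' actual implementation.

```python
# Hypothetical sketch of gated cross-attention fusion between layout and
# text embeddings, in the spirit of the Joint-Modality Embedder (JME).
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn


class GatedJointEmbedder(nn.Module):
    def __init__(self, dim: int = 768, num_heads: int = 8):
        super().__init__()
        # Cross-attention: layout tokens attend to text tokens.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        # Learnable scalar gate initialized at zero, so fusion starts as identity.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, layout_tokens: torch.Tensor, text_tokens: torch.Tensor) -> torch.Tensor:
        # layout_tokens: (B, L, dim); text_tokens: (B, T, dim)
        attended, _ = self.attn(self.norm(layout_tokens), text_tokens, text_tokens)
        # Gated residual: tanh(gate) controls how much text context is injected.
        return layout_tokens + torch.tanh(self.gate) * attended


if __name__ == "__main__":
    jme = GatedJointEmbedder()
    layout = torch.randn(2, 16, 768)  # e.g. embedded layout/bounding-box tokens
    text = torch.randn(2, 77, 768)    # e.g. long-caption embeddings
    fused = jme(layout, text)
    print(fused.shape)                # torch.Size([2, 16, 768])
```

The zero-initialized gate is a common design choice in conditional generation: the fused branch contributes nothing at the start of training and its influence grows as the gate is learned.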
