
DuoGen: Towards General Purpose Interleaved Multimodal Generation

Min Shi
Xiaohui Zeng
Jiannan Huang
Yin Cui
Francesco Ferroni
Jialuo Li
Shubham Pachori
Zhaoshuo Li
Yogesh Balaji
Haoxiang Wang
Tsung-Yi Lin
Xiao Fu
Yue Zhao
Chieh-Yun Chen
Ming-Yu Liu
Humphrey Shi
Main: 8 pages · Bibliography: 3 pages · Appendix: 23 pages · 18 figures · 11 tables
Abstract

Interleaved multimodal generation enables capabilities beyond unimodal generation models, such as step-by-step instructional guides, visual planning, and generating visual drafts for reasoning. However, the quality of existing interleaved generation models under general instructions remains limited by insufficient training data and base model capacity. We present DuoGen, a general-purpose interleaved generation framework that systematically addresses data curation, architecture design, and evaluation. On the data side, we build a large-scale, high-quality instruction-tuning dataset by combining multimodal conversations rewritten from curated raw websites with diverse synthetic examples covering everyday scenarios. Architecturally, DuoGen leverages the strong visual understanding of a pretrained multimodal LLM and the visual generation capabilities of a diffusion transformer (DiT) pretrained on video generation, avoiding costly unimodal pretraining and enabling flexible base model selection. A two-stage decoupled strategy first instruction-tunes the MLLM, then aligns the DiT with it using curated interleaved image-text sequences. Across public and newly proposed benchmarks, DuoGen outperforms prior open-source models in text quality, image fidelity, and image-context alignment, and also achieves state-of-the-art performance on text-to-image and image editing among unified generation models. Data and code will be released at this https URL.
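
To make the two-stage decoupled strategy described above concrete, here is a minimal sketch of how such a pipeline could be wired up. This is not the authors' implementation: all class, argument, and attribute names (DuoGenSketch, InterleavedMLLM-style interfaces, diffusion_loss, etc.) are hypothetical placeholders assumed for illustration, and the actual DuoGen code is what will be released at the URL above.

```python
# Hypothetical sketch of a decoupled two-stage training loop: an instruction-tuned
# MLLM is coupled to a video-pretrained DiT through a learned connector.
# Module interfaces below are assumptions, not the released DuoGen API.
import torch
import torch.nn as nn


class DuoGenSketch(nn.Module):
    """Couples a pretrained multimodal LLM with a video-pretrained DiT."""

    def __init__(self, mllm: nn.Module, dit: nn.Module, hidden_dim: int = 4096):
        super().__init__()
        self.mllm = mllm            # interleaved text/image understanding and text generation
        self.dit = dit              # image synthesis, pretrained on video generation
        # Projects MLLM hidden states into the DiT's conditioning space.
        self.connector = nn.Linear(hidden_dim, hidden_dim)


def train_stage1(model: DuoGenSketch, loader, optimizer):
    """Stage 1: instruction-tune the MLLM on interleaved data; the DiT stays frozen."""
    model.dit.requires_grad_(False)
    for batch in loader:
        loss = model.mllm(**batch).loss        # next-token loss on interleaved sequences
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()


def train_stage2(model: DuoGenSketch, loader, optimizer):
    """Stage 2: align the DiT with the (now frozen) MLLM via the connector."""
    model.mllm.requires_grad_(False)
    for batch in loader:
        with torch.no_grad():
            context = model.mllm(**batch["context"]).hidden_states[-1]
        cond = model.connector(context)
        # Hypothetical DiT interface returning a denoising (diffusion) loss
        # for the target images conditioned on the projected context.
        loss = model.dit(batch["target_images"], cond=cond).diffusion_loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

Under these assumptions, freezing one branch in each stage is what makes the strategy "decoupled": the MLLM and DiT are never optimized jointly from scratch, so either base model can be swapped without repeating unimodal pretraining.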
