
MINT: Multi-modal Chain of Thought in Unified Generative Models for Enhanced Image Generation

Abstract

Unified generative models have demonstrated extraordinary performance in both text and image generation. However, they tend to underperform when generating intricate images with various interwoven conditions, which are difficult to handle through straightforward text-to-image generation alone. In response to this challenge, we introduce MINT, an innovative unified generative model that is, for the first time, empowered with native multimodal chain of thought (MCoT) for enhanced image generation. First, we design the Mixture of Transformer Experts (MTXpert), an expert-parallel structure that effectively supports both natural language generation (NLG) and visual capabilities while avoiding modality conflicts that could hinder the full potential of each modality. Building on this, we propose an innovative MCoT training paradigm, a step-by-step approach to multimodal thinking, reasoning, and reflection specifically designed to enhance image generation. This paradigm equips MINT with nuanced, element-wise decoupled alignment and a comprehensive understanding of textual and visual components. Furthermore, it fosters advanced multimodal reasoning and self-reflection, enabling the construction of images that are firmly grounded in the logical relationships among these elements. Notably, MINT has been validated to exhibit superior performance across multiple benchmarks for text-to-image (T2I) and image-to-text (I2T) tasks.
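
To make the expert-parallel idea behind MTXpert concrete, below is a minimal PyTorch sketch of a transformer layer in which text and image tokens share self-attention over the joint sequence but are routed to separate feed-forward experts. The class name ModalityExpertLayer, the modality_ids routing scheme, and the two-expert split are illustrative assumptions for this sketch, not the paper's actual MTXpert implementation.

import torch
import torch.nn as nn

class ModalityExpertLayer(nn.Module):
    """Toy transformer layer with modality-specific (expert-parallel) FFNs.

    Text and image tokens attend to each other through one shared
    self-attention, but each modality is processed by its own feed-forward
    expert, so the two modalities do not compete for the same MLP weights.
    (Illustrative sketch only; not the paper's MTXpert code.)
    """

    def __init__(self, dim: int = 512, n_heads: int = 8, ffn_mult: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        # One FFN expert per modality: index 0 = text, index 1 = image.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, ffn_mult * dim), nn.GELU(),
                          nn.Linear(ffn_mult * dim, dim))
            for _ in range(2)
        ])

    def forward(self, x: torch.Tensor, modality_ids: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim); modality_ids: (batch, seq), 0 = text, 1 = image.
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h)          # shared attention over the joint sequence
        x = x + attn_out
        h = self.norm2(x)
        ffn_out = torch.zeros_like(h)
        for idx, expert in enumerate(self.experts):
            mask = (modality_ids == idx)          # route each token to its modality expert
            if mask.any():
                ffn_out[mask] = expert(h[mask])
        return x + ffn_out

if __name__ == "__main__":
    layer = ModalityExpertLayer()
    tokens = torch.randn(2, 16, 512)
    # Toy split: the first 6 positions are text tokens, the rest are image tokens.
    modality_ids = torch.cat([torch.zeros(2, 6, dtype=torch.long),
                              torch.ones(2, 10, dtype=torch.long)], dim=1)
    out = layer(tokens, modality_ids)
    print(out.shape)  # torch.Size([2, 16, 512])

Routing at the feed-forward level while keeping attention shared is one simple way to decouple modality-specific computation without breaking cross-modal interaction, which is the kind of conflict-avoidance the abstract attributes to the expert-parallel design.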

@article{wang2025_2503.01298,
  title={MINT: Multi-modal Chain of Thought in Unified Generative Models for Enhanced Image Generation},
  author={Yi Wang and Mushui Liu and Wanggui He and Longxiang Zhang and Ziwei Huang and Guanghao Zhang and Fangxun Shu and Zhong Tao and Dong She and Zhelun Yu and Haoyuan Li and Weilong Dai and Mingli Song and Jie Song and Hao Jiang},
  journal={arXiv preprint arXiv:2503.01298},
  year={2025}
}