
M²Chat: Empowering VLM for Multimodal LLM Interleaved Text-Image Generation

8 pages main text, 3 pages bibliography, 6 figures, 3 tables
Abstract

While current LLM chatbots like GPT-4V bridge the gap between human instructions and visual representations to enable text-image generation, they still lack efficient alignment methods for high-fidelity performance on multiple downstream tasks. In this paper, we propose M²Chat, a novel unified multimodal LLM framework for generating interleaved text-image conversation across various scenarios. Specifically, we propose an M³Adapter that efficiently integrates granular low-level visual information and high-level semantic features from multi-modality prompts. Building upon the well-aligned fused features, the M³Adapter employs a learnable gating strategy to adaptively balance model creativity and consistency across various tasks. Moreover, to further enhance the effectiveness of the M³Adapter while preserving the coherence of semantic context comprehension, we introduce a two-stage M³FT fine-tuning strategy. This strategy optimizes disjoint groups of parameters for image-text alignment and visual instruction tuning, respectively. Extensive experiments demonstrate that our M²Chat surpasses state-of-the-art counterparts across diverse benchmarks, showcasing its prowess in interleaved generation, storytelling, and multimodal dialogue systems. The demo and code are available at this https URL.
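To make the gating idea concrete, below is a minimal, illustrative sketch of a gated feature-fusion adapter in PyTorch. It is not the paper's released implementation; the module name `GatedFusionAdapter`, the projection layers, and the scalar gate are assumptions chosen only to mirror the abstract's description of balancing low-level visual detail (consistency) against high-level semantics (creativity).

```python
# Illustrative sketch only: names, dimensions, and the scalar-gate design are
# assumptions, not the authors' code.
import torch
import torch.nn as nn

class GatedFusionAdapter(nn.Module):
    """Fuses low-level visual features with high-level semantic features
    through a learnable gate."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj_low = nn.Linear(dim, dim)    # project low-level visual features
        self.proj_high = nn.Linear(dim, dim)   # project high-level semantic features
        # Learnable scalar gate; sigmoid keeps it in (0, 1), trading off
        # consistency (low-level detail) against creativity (semantic freedom).
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, low_feat: torch.Tensor, high_feat: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate)
        return g * self.proj_low(low_feat) + (1.0 - g) * self.proj_high(high_feat)

# Usage (hypothetical feature shapes):
# adapter = GatedFusionAdapter(dim=768)
# fused = adapter(low_feat, high_feat)   # both tensors of shape (batch, tokens, 768)
```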
