Unified multimodal models aim to integrate understanding (text output) and generation (pixel output), but aligning these different modalities within a single architecture often demands complex training recipes and careful data balancing. We introduce MetaQueries, a set of learnable queries that act as an efficient interface between autoregressive multimodal LLMs (MLLMs) and diffusion models. MetaQueries connects the MLLM's latents to the diffusion decoder, enabling knowledge-augmented image generation by leveraging the MLLM's deep understanding and reasoning capabilities. Our method simplifies training, requiring only paired image-caption data and standard diffusion objectives. Notably, this transfer is effective even when the MLLM backbone remains frozen, thereby preserving its state-of-the-art multimodal understanding capabilities while achieving strong generative performance. Additionally, our method is flexible and can be easily instruction-tuned for advanced applications such as image editing and subject-driven generation.
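To make the interface concrete, the following PyTorch sketch shows one way learnable queries could read latents out of a frozen backbone and turn them into conditioning tokens for a diffusion decoder. The class name MetaQueryInterface, the stand-in TransformerEncoder backbone, and all dimensions are illustrative assumptions, not the paper's released code.

import torch
import torch.nn as nn

class MetaQueryInterface(nn.Module):
    # Sketch: learnable query embeddings are appended to the prompt tokens,
    # run through a frozen backbone, and the query positions' output latents
    # are projected into conditioning tokens for a diffusion decoder.
    def __init__(self, backbone, num_queries=64, hidden_dim=512, cond_dim=256):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad_(False)          # keep the (M)LLM backbone frozen
        # The learnable queries ("MetaQueries"); only these and the projection
        # below receive gradients from the diffusion loss.
        self.queries = nn.Parameter(torch.randn(num_queries, hidden_dim) * 0.02)
        self.to_cond = nn.Linear(hidden_dim, cond_dim)

    def forward(self, prompt_embeds):
        # prompt_embeds: (batch, seq_len, hidden_dim) caption token embeddings.
        b = prompt_embeds.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        x = torch.cat([prompt_embeds, q], dim=1)     # append queries to the prompt
        hidden = self.backbone(x)                    # frozen forward pass
        query_latents = hidden[:, -q.size(1):]       # read out the query positions
        return self.to_cond(query_latents)           # (batch, num_queries, cond_dim)

# Toy usage with a stand-in backbone (a real setup would use a pretrained MLLM
# with a much larger hidden size).
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True),
    num_layers=2,
)
interface = MetaQueryInterface(backbone)
cond = interface(torch.randn(2, 32, 512))
print(cond.shape)  # torch.Size([2, 64, 256])

In the full pipeline, the conditioning tokens would be passed to a diffusion decoder, and the queries plus connector trained with the standard denoising objective on paired image-caption data while the MLLM backbone stays frozen, as described above.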
@article{pan2025_2504.06256,
  title={Transfer between Modalities with MetaQueries},
  author={Xichen Pan and Satya Narayan Shukla and Aashu Singh and Zhuokai Zhao and Shlok Kumar Mishra and Jialiang Wang and Zhiyang Xu and Jiuhai Chen and Kunpeng Li and Felix Juefei-Xu and Ji Hou and Saining Xie},
  journal={arXiv preprint arXiv:2504.06256},
  year={2025}
}