
LDGen: Enhancing Text-to-Image Synthesis via Large Language Model-Driven Language Representation

Abstract

In this paper, we introduce LDGen, a novel method for integrating large language models (LLMs) into existing text-to-image diffusion models while minimizing computational demands. Traditional text encoders, such as CLIP and T5, exhibit limitations in multilingual processing, hindering image generation across diverse languages. We address these challenges by leveraging the advanced capabilities of LLMs. Our approach employs a language representation strategy that applies hierarchical caption optimization and human instruction techniques to derive precise semantic information. Subsequently, we incorporate a lightweight adapter and a cross-modal refiner to facilitate efficient feature alignment and interaction between LLM and image features. LDGen reduces training time and enables zero-shot multilingual image generation. Experimental results indicate that our method surpasses baseline models in both prompt adherence and image aesthetic quality, while seamlessly supporting multiple languages. Project page: this https URL.
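To make the architectural idea concrete, below is a minimal sketch of the two components the abstract names: a lightweight adapter that projects LLM hidden states into the diffusion model's conditioning space, and a cross-modal refiner in which text features attend to image features. All module names, dimensions, and design details here are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of an LLM adapter and cross-modal refiner.
# Dimensions and architecture are assumed, not taken from the paper.
import torch
import torch.nn as nn


class LLMAdapter(nn.Module):
    """Projects frozen LLM hidden states to the diffusion conditioning dim."""

    def __init__(self, llm_dim: int = 4096, cond_dim: int = 1024):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(llm_dim, cond_dim),
            nn.GELU(),
            nn.Linear(cond_dim, cond_dim),
        )
        self.norm = nn.LayerNorm(cond_dim)

    def forward(self, llm_hidden: torch.Tensor) -> torch.Tensor:
        # llm_hidden: (batch, seq_len, llm_dim) from the LLM's last layer
        return self.norm(self.proj(llm_hidden))


class CrossModalRefiner(nn.Module):
    """Refines adapted text features by cross-attending to image features."""

    def __init__(self, cond_dim: int = 1024, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(cond_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(cond_dim)
        self.ff = nn.Sequential(
            nn.Linear(cond_dim, 4 * cond_dim),
            nn.GELU(),
            nn.Linear(4 * cond_dim, cond_dim),
        )

    def forward(self, text_feats: torch.Tensor, image_feats: torch.Tensor) -> torch.Tensor:
        # Text features act as queries over image features (cross-attention),
        # followed by a residual feed-forward block.
        attended, _ = self.attn(text_feats, image_feats, image_feats)
        x = self.norm(text_feats + attended)
        return x + self.ff(x)


# Usage: adapt LLM features, then refine them against image latent tokens.
adapter = LLMAdapter()
refiner = CrossModalRefiner()
llm_hidden = torch.randn(2, 77, 4096)    # stand-in LLM hidden states
image_feats = torch.randn(2, 256, 1024)  # stand-in image latent tokens
cond = refiner(adapter(llm_hidden), image_feats)
print(cond.shape)  # torch.Size([2, 77, 1024])
```

In a setup like this, only the adapter and refiner would be trained while the LLM and diffusion backbone stay frozen, which is consistent with the abstract's claims of reduced training time and minimal computational demands.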

@article{li2025_2502.18302,
  title={LDGen: Enhancing Text-to-Image Synthesis via Large Language Model-Driven Language Representation},
  author={Pengzhi Li and Pengfei Yu and Zide Liu and Wei He and Xuhao Pan and Xudong Rao and Tao Wei and Wei Chen},
  journal={arXiv preprint arXiv:2502.18302},
  year={2025}
}