CTR-Driven Advertising Image Generation with Multimodal Large Language Models

In web data, advertising images are crucial for capturing user attention and improving advertising effectiveness. Most existing methods for generating product backgrounds focus primarily on aesthetic quality and may therefore fail to achieve satisfactory online performance. To address this limitation, we explore the use of Multimodal Large Language Models (MLLMs) for generating advertising images with Click-Through Rate (CTR) as the primary optimization objective. First, we build targeted pre-training tasks and leverage a large-scale e-commerce multimodal dataset to equip MLLMs with initial capabilities for advertising image generation. To further improve the CTR of generated images, we propose a novel reward model that jointly utilizes multimodal features and accurately reflects user click preferences, and use it to fine-tune the pre-trained MLLMs through Reinforcement Learning (RL). Meanwhile, a product-centric preference optimization strategy is developed to ensure that the generated background content aligns with the product characteristics after fine-tuning, enhancing the overall relevance and effectiveness of the advertising images. Extensive experiments demonstrate that our method achieves state-of-the-art performance on both online and offline metrics. Our code and pre-trained models are publicly available at: this https URL.
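The RL fine-tuning step described above can be illustrated with a minimal, hypothetical sketch: a toy reward model scores a generated image's predicted click probability from multimodal features, and a REINFORCE-style surrogate loss up-weights samples the reward model scores highly. All names and shapes here are illustrative assumptions, not the paper's actual implementation.

```python
import math

def ctr_reward(image_feat, text_feat, weights):
    """Toy stand-in for the paper's learned multimodal CTR reward model:
    a logistic score over concatenated image and text features."""
    z = sum(w * f for w, f in zip(weights, image_feat + text_feat))
    return 1.0 / (1.0 + math.exp(-z))  # predicted click probability in (0, 1)

def reinforce_loss(log_probs, rewards, baseline=0.0):
    """REINFORCE-style surrogate: samples with above-baseline CTR reward
    receive positive weight, pushing the generator toward clickable images."""
    return -sum(lp * (r - baseline)
                for lp, r in zip(log_probs, rewards)) / len(rewards)

# Illustrative usage with made-up feature values.
r = ctr_reward([0.2, 0.5], [0.1], weights=[1.0, -0.5, 2.0])
loss = reinforce_loss([-1.2, -0.8], [0.9, 0.3], baseline=0.6)
```

In practice the reward model would be a trained network over MLLM embeddings and the baseline a running average of rewards; the sketch only shows the shape of the objective.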
@article{chen2025_2502.06823,
  title={CTR-Driven Advertising Image Generation with Multimodal Large Language Models},
  author={Xingye Chen and Wei Feng and Zhenbang Du and Weizhen Wang and Yanyin Chen and Haohan Wang and Linkai Liu and Yaoyu Li and Jinyuan Zhao and Yu Li and Zheng Zhang and Jingjing Lv and Junjie Shen and Zhangang Lin and Jingping Shao and Yuanjie Shao and Xinge You and Changxin Gao and Nong Sang},
  journal={arXiv preprint arXiv:2502.06823},
  year={2025}
}