DesignDiffusion: High-Quality Text-to-Design Image Generation with Diffusion Models

3 March 2025
Zhendong Wang
Jianmin Bao
Shuyang Gu
Dong Chen
Wengang Zhou
Houqiang Li
Abstract

In this paper, we present DesignDiffusion, a simple yet effective framework for the novel task of synthesizing design images from textual descriptions. A primary challenge lies in generating accurate and style-consistent textual and visual content. Existing works on the related task of visual text generation typically focus on rendering text within specified regions, which limits the creativity of generation models and, when applied to design image generation, leads to style or color inconsistencies between textual and visual elements. To address this issue, we propose an end-to-end, one-stage diffusion-based framework that avoids intricate components such as position and layout modeling. Specifically, the proposed framework directly synthesizes textual and visual design elements from user prompts. It utilizes a distinctive character embedding derived from the visual text to enhance the input prompt, along with a character localization loss for stronger supervision during text generation. Furthermore, we employ a self-play Direct Preference Optimization fine-tuning strategy to improve the quality and accuracy of the synthesized visual text. Extensive experiments demonstrate that DesignDiffusion achieves state-of-the-art performance in design image generation.
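
The abstract names two training-time ideas: augmenting the prompt with character embeddings derived from the visual text, and a character localization loss that supervises where that text is rendered. The snippet below is a minimal, hypothetical PyTorch sketch of how such components could be wired together; the names (CharacterPromptAugmenter, character_localization_loss), tensor shapes, and the lambda_loc weighting are illustrative assumptions, not the paper's implementation.

    # Hypothetical sketch: prompt augmentation with character embeddings
    # plus a character localization loss. Shapes and weighting are assumed.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CharacterPromptAugmenter(nn.Module):
        def __init__(self, vocab_size: int = 128, dim: int = 768):
            super().__init__()
            # One learnable embedding per character of the visual text (assumption).
            self.char_embed = nn.Embedding(vocab_size, dim)

        def forward(self, prompt_emb: torch.Tensor, char_ids: torch.Tensor) -> torch.Tensor:
            # prompt_emb: (B, L, D) text-encoder output for the user prompt
            # char_ids:   (B, T) character indices of the text to be rendered
            char_emb = self.char_embed(char_ids)  # (B, T, D)
            # Concatenate character tokens after the prompt tokens so the
            # diffusion model can attend to both (one simple design choice).
            return torch.cat([prompt_emb, char_emb], dim=1)

    def character_localization_loss(attn_maps: torch.Tensor,
                                    char_masks: torch.Tensor) -> torch.Tensor:
        # attn_maps:  (B, T, H, W) cross-attention maps of the character tokens
        # char_masks: (B, T, H, W) binary masks marking where each character
        #             appears in the ground-truth design image
        # Encourage each character token to attend to its own region.
        return F.binary_cross_entropy(attn_maps.clamp(1e-6, 1 - 1e-6), char_masks)

    def training_loss(noise_pred, noise, attn_maps, char_masks, lambda_loc=0.1):
        # Standard diffusion denoising loss plus the localization term;
        # lambda_loc is an assumed weighting, not a reported value.
        diffusion_loss = F.mse_loss(noise_pred, noise)
        loc_loss = character_localization_loss(attn_maps, char_masks)
        return diffusion_loss + lambda_loc * loc_loss

In such a setup, the augmenter would extend the conditioning sequence fed to the diffusion backbone, while the localization loss would be added to the usual denoising objective during fine-tuning; the self-play DPO stage described in the abstract would follow as a separate preference-based fine-tuning step.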

@article{wang2025_2503.01645,
  title={DesignDiffusion: High-Quality Text-to-Design Image Generation with Diffusion Models},
  author={Zhendong Wang and Jianmin Bao and Shuyang Gu and Dong Chen and Wengang Zhou and Houqiang Li},
  journal={arXiv preprint arXiv:2503.01645},
  year={2025}
}