CountDiffusion: Text-to-Image Synthesis with Training-Free Counting-Guidance Diffusion

7 May 2025
Yanyu Li, Pencheng Wan, Liang Han, Yaowei Wang, Liqiang Nie, Min Zhang
Abstract

Stable Diffusion has advanced text-to-image synthesis, but training models to generate images with an accurate object quantity remains difficult, owing both to the high computational cost and to the challenge of teaching models the abstract concept of quantity. In this paper, we propose CountDiffusion, a training-free framework for generating images with the correct object quantity from textual descriptions. CountDiffusion consists of two stages. In the first stage, the diffusion model produces an intermediate denoising result that predicts the final synthesized image via one-step denoising, and a counting model counts the objects in this predicted image. In the second stage, a correction module adjusts the object quantity by changing the object's attention map through universal guidance. The proposed CountDiffusion can be plugged into any diffusion-based text-to-image (T2I) generation model without further training. Experimental results demonstrate the superiority of CountDiffusion, which improves the accurate object-quantity generation ability of T2I models by a large margin.
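To make the two-stage loop concrete, here is a minimal Python sketch of how it could be wired up. It assumes a diffusers-style UNet and scheduler; unet, scheduler, counting_model, and target_count are hypothetical stand-ins rather than the authors' implementation, and for brevity the correction is expressed as a counting-loss gradient on the latent, whereas the paper edits the object's attention map via universal guidance.

import torch

def one_step_x0_prediction(latent, t, unet, scheduler, text_emb):
    # Stage 1: predict the final image from an intermediate latent with
    # a single denoising step (standard epsilon -> x0 conversion).
    eps = unet(latent, t, encoder_hidden_states=text_emb).sample
    alpha_bar = scheduler.alphas_cumprod[t]
    return (latent - (1 - alpha_bar).sqrt() * eps) / alpha_bar.sqrt()

def counting_guided_step(latent, t, unet, scheduler, text_emb,
                         counting_model, target_count, guidance_scale=1.0):
    # Stage 2 (simplified): if the predicted count is wrong, steer the
    # sample using the gradient of a counting loss (universal guidance).
    latent = latent.detach().requires_grad_(True)
    x0 = one_step_x0_prediction(latent, t, unet, scheduler, text_emb)
    # In a latent-diffusion pipeline, x0 would be decoded by the VAE
    # before counting; counting_model is assumed differentiable here.
    count = counting_model(x0)
    loss = (count - target_count).pow(2).sum()
    grad = torch.autograd.grad(loss, latent)[0]
    return (latent - guidance_scale * grad).detach()

Because the correction only perturbs the sampling trajectory, the same loop can wrap the denoising step of any diffusion-based T2I model, which is what makes the approach training-free and plug-in.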

View on arXiv
@article{li2025_2505.04347,
  title={CountDiffusion: Text-to-Image Synthesis with Training-Free Counting-Guidance Diffusion},
  author={Yanyu Li and Pencheng Wan and Liang Han and Yaowei Wang and Liqiang Nie and Min Zhang},
  journal={arXiv preprint arXiv:2505.04347},
  year={2025}
}