
Semantic Image Synthesis via Diffusion Models

30 June 2022
Weilun Wang
Wengang Zhou
Dongdong Chen
Dong Chen
Lu Yuan
Houqiang Li
Abstract

Denoising Diffusion Probabilistic Models (DDPMs) have achieved remarkable success in various image generation tasks compared with Generative Adversarial Networks (GANs). Recent work on semantic image synthesis mostly follows the de facto GAN-based approaches, which may lead to unsatisfactory quality or diversity in the generated images. In this paper, we propose a novel DDPM-based framework for semantic image synthesis. Unlike previous conditional diffusion models, which directly feed the semantic layout and the noisy image together into a U-Net and may therefore not fully leverage the information in the semantic mask, our framework processes the semantic layout and the noisy image differently: it feeds the noisy image into the encoder of the U-Net, while injecting the semantic layout into the decoder through multi-layer spatially-adaptive normalization operators. To further improve generation quality and semantic interpretability, we adopt the classifier-free guidance sampling strategy, which incorporates the score of an unconditional model into the sampling process. Extensive experiments on four benchmark datasets demonstrate the effectiveness of the proposed method, which achieves state-of-the-art performance in terms of fidelity (FID) and diversity (LPIPS). Our code and pretrained models are available at this https URL.
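The two mechanisms named in the abstract can be illustrated with a minimal numpy sketch. The function names and shapes below are illustrative, not the authors' implementation: `spatially_adaptive_norm` shows the SPADE-style idea of normalizing a feature map per channel and then modulating it with spatially varying scale/shift maps predicted from the semantic layout, and `classifier_free_guidance` shows the standard combination of conditional and unconditional noise predictions.

```python
import numpy as np

def spatially_adaptive_norm(x, gamma, beta, eps=1e-5):
    """Spatially-adaptive normalization sketch.

    x     : feature map of shape (C, H, W)
    gamma : per-pixel scale map of shape (C, H, W), predicted from the
            semantic layout (the prediction network is omitted here)
    beta  : per-pixel shift map of shape (C, H, W), same origin
    """
    # Normalize each channel to zero mean and unit variance.
    mean = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    # Modulate with the layout-dependent scale and shift.
    return (1.0 + gamma) * x_hat + beta

def classifier_free_guidance(eps_cond, eps_uncond, scale):
    """Classifier-free guidance: blend the conditional and
    unconditional noise predictions,
        eps = eps_uncond + scale * (eps_cond - eps_uncond).
    scale = 1 recovers the purely conditional prediction."""
    return eps_uncond + scale * (eps_cond - eps_uncond)
```

In a full sampler, `gamma` and `beta` would be produced by small convolutional heads applied to the (resized) semantic mask at every decoder resolution, and the guided noise estimate would replace the conditional one at each denoising step.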

@article{wang2022_2207.00050,
  title={Semantic Image Synthesis via Diffusion Models},
  author={Weilun Wang and Wengang Zhou and Dongdong Chen and Dong Chen and Lu Yuan and Houqiang Li},
  journal={arXiv preprint arXiv:2207.00050},
  year={2022}
}