
ElasticDiffusion: Training-free Arbitrary Size Image Generation through Global-Local Content Separation

Abstract

Diffusion models have revolutionized image generation in recent years, yet they are still limited to a few sizes and aspect ratios. We propose ElasticDiffusion, a novel training-free decoding method that enables pretrained text-to-image diffusion models to generate images of various sizes. ElasticDiffusion decouples the generation trajectory of a pretrained model into local and global signals. The local signal controls low-level pixel information and can be estimated on local patches, while the global signal maintains overall structural consistency and is estimated with a reference image. We test our method on CelebA-HQ (faces) and LAION-COCO (objects/indoor/outdoor scenes). Our experiments and qualitative results show superior image coherence across aspect ratios compared to MultiDiffusion and the standard decoding strategy of Stable Diffusion. Project page: this https URL
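
A minimal sketch of the global-local split described in the abstract, under stated assumptions: `eps_model(x, t, emb)` is a hypothetical stand-in for a pretrained noise-prediction UNet, the local signal is taken to be the unconditional noise estimate averaged over overlapping patches of the full-size latent, and the global signal is the classifier-free-guidance direction computed on a reference-size view and resized to the target resolution. The patch size, stride, reference size, and combination rule are illustrative choices, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F


def _starts(size, patch, stride):
    # Patch start positions that tile `size`, with a final patch flush
    # with the far edge so every pixel is covered at least once.
    if size <= patch:
        return [0]
    positions = list(range(0, size - patch, stride))
    positions.append(size - patch)
    return positions


def elastic_step_sketch(eps_model, latent, t, cond, uncond,
                        ref_size=(64, 64), patch=64, stride=48, guidance=7.5):
    """One denoising step split into local and global signals (sketch).

    Local signal : unconditional noise estimate, averaged over overlapping
                   patches of the arbitrary-size latent.
    Global signal: classifier-free-guidance direction (cond - uncond),
                   estimated on a reference-size view and resized back.
    `eps_model(x, t, emb)` is a hypothetical stand-in for a pretrained UNet.
    """
    B, C, H, W = latent.shape

    # Local signal: patch-wise unconditional estimates, averaged on overlaps.
    eps_local = torch.zeros_like(latent)
    count = torch.zeros_like(latent)
    for top in _starts(H, patch, stride):
        for left in _starts(W, patch, stride):
            crop = latent[:, :, top:top + patch, left:left + patch]
            eps_local[:, :, top:top + patch, left:left + patch] += eps_model(crop, t, uncond)
            count[:, :, top:top + patch, left:left + patch] += 1.0
    eps_local = eps_local / count.clamp(min=1.0)

    # Global signal: CFG direction at the reference resolution, then resized.
    ref = F.interpolate(latent, size=ref_size, mode="bilinear", align_corners=False)
    global_dir = eps_model(ref, t, cond) - eps_model(ref, t, uncond)
    global_dir = F.interpolate(global_dir, size=(H, W), mode="bilinear", align_corners=False)

    # Combine the two signals into the final noise prediction for this step.
    return eps_local + guidance * global_dir
```

With a toy callable such as `lambda x, t, emb: torch.zeros_like(x)` in place of `eps_model`, the function runs end-to-end on a latent of any spatial size; the intent of the decomposition is that the resized global direction keeps the overall layout consistent while the patch-wise term supplies local detail.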

@article{haji-ali2025_2311.18822,
  title={ElasticDiffusion: Training-free Arbitrary Size Image Generation through Global-Local Content Separation},
  author={Moayed Haji-Ali and Guha Balakrishnan and Vicente Ordonez},
  journal={arXiv preprint arXiv:2311.18822},
  year={2025}
}