Training-free Stylized Text-to-Image Generation with Fast Inference

Although diffusion models exhibit impressive generative capabilities, existing methods for stylized image generation based on these models often require textual inversion or fine-tuning with style images, processes that are time-consuming and limit the practical applicability of large-scale diffusion models. To address these challenges, we propose a novel stylized image generation method, termed OmniPainter, that leverages a pre-trained large-scale diffusion model without requiring fine-tuning or any additional optimization. Specifically, we exploit the self-consistency property of latent consistency models to extract representative style statistics from reference style images to guide the stylization process. We then introduce the norm mixture of self-attention, which enables the model to query the most relevant style patterns from these statistics for the intermediate content features. This mechanism also ensures that the stylized results align closely with the distribution of the reference style images. Qualitative and quantitative experimental results demonstrate that the proposed method outperforms state-of-the-art approaches.
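
To make the two ingredients concrete, below is a minimal, hypothetical PyTorch sketch of (a) extracting channel-wise statistics from reference style features and (b) letting content features query those statistics through attention. It assumes the style statistics are per-channel means and standard deviations of intermediate features; the function names and the residual blending are illustrative and not taken from the paper's actual norm mixture of self-attention.

import torch
import torch.nn.functional as F

def extract_style_stats(style_feats: torch.Tensor) -> torch.Tensor:
    """Channel-wise mean and std of reference style features.

    style_feats: (B, C, H, W) intermediate features of the style image(s).
    Returns a (B, 2, C) tensor stacking mean and std per channel.
    (Assumed form of the "representative style statistics".)
    """
    mean = style_feats.mean(dim=(2, 3))          # (B, C)
    std = style_feats.std(dim=(2, 3)) + 1e-6     # (B, C)
    return torch.stack([mean, std], dim=1)       # (B, 2, C)

def style_query_attention(content: torch.Tensor, style_stats: torch.Tensor) -> torch.Tensor:
    """Content tokens query style-statistic tokens via scaled dot-product attention.

    content: (B, N, C) flattened intermediate content features (queries).
    style_stats: (B, M, C) style statistic tokens (keys/values).
    This is an illustrative stand-in, not the paper's exact formulation.
    """
    q = F.normalize(content, dim=-1)
    k = F.normalize(style_stats, dim=-1)
    attn = torch.softmax(q @ k.transpose(1, 2) / content.shape[-1] ** 0.5, dim=-1)
    mixed = attn @ style_stats                   # (B, N, C) queried style patterns
    return content + mixed                       # residual blend of content and style

# Toy usage with random tensors
B, C, H, W = 1, 64, 16, 16
content = torch.randn(B, H * W, C)
style = torch.randn(B, C, H, W)
stats = extract_style_stats(style)               # (B, 2, C)
out = style_query_attention(content, stats)
print(out.shape)                                 # torch.Size([1, 256, 64])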
@article{ma2025_2505.19063,
  title={Training-free Stylized Text-to-Image Generation with Fast Inference},
  author={Xin Ma and Yaohui Wang and Xinyuan Chen and Tien-Tsin Wong and Cunjian Chen},
  journal={arXiv preprint arXiv:2505.19063},
  year={2025}
}