
One-Way Ticket: Time-Independent Unified Encoder for Distilling Text-to-Image Diffusion Models

Main: 8 Pages
17 Figures
Bibliography: 4 Pages
5 Tables
Appendix: 11 Pages
Abstract

Text-to-Image (T2I) diffusion models have made remarkable advances in generative modeling; however, they face a trade-off between inference speed and image quality, posing challenges for efficient deployment. Existing distilled T2I models can generate high-fidelity images with fewer sampling steps, but often suffer in diversity and quality, especially in one-step models. Our analysis reveals redundant computation in the UNet encoders. Our findings suggest that, for T2I diffusion models, decoders are more adept at capturing richer and more explicit semantic information, while encoders can be effectively shared across decoders at different time steps. Based on these observations, we introduce the first Time-independent Unified Encoder (TiUE) as the student UNet architecture, a loop-free image-generation approach for distilling T2I diffusion models. Using a one-pass scheme, TiUE shares encoder features across multiple decoder time steps, enabling parallel sampling and significantly reducing inference time complexity. In addition, we incorporate a KL divergence term to regularize noise prediction, which enhances the perceptual realism and diversity of the generated images. Experimental results demonstrate that TiUE outperforms state-of-the-art methods, including LCM, SD-Turbo, and SwiftBrushv2, producing more diverse and realistic results while maintaining computational efficiency.
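
To make the one-pass scheme concrete, below is a minimal PyTorch-style sketch of the shared-encoder idea and a simplified KL regularizer. The Encoder/Decoder modules, their signatures, and the exact form of the KL term are illustrative assumptions, not the paper's implementation.

    import torch
    import torch.nn as nn

    # Sketch of the shared-encoder idea: the UNet encoder runs ONCE per
    # image, and its output (including skip features) is reused by decoder
    # evaluations at multiple timesteps, which can then be parallelized.
    # Encoder/Decoder signatures here are hypothetical placeholders.
    class TiUESketch(nn.Module):
        def __init__(self, encoder: nn.Module, decoder: nn.Module):
            super().__init__()
            self.encoder = encoder  # time-independent: conditioned on x and text only
            self.decoder = decoder  # time-dependent: also conditioned on timestep t

        def forward(self, x, text_emb, timesteps):
            feats, skips = self.encoder(x, text_emb)  # one encoder pass, shared
            preds = [self.decoder(feats, skips, t, text_emb) for t in timesteps]
            return torch.stack(preds)  # [num_steps, B, C, H, W]

    def kl_noise_regularizer(noise_pred):
        # Closed-form KL(N(mu, var) || N(0, 1)) computed from the moments of
        # the predicted noise; a hedged guess at a KL term that pushes noise
        # predictions toward a standard Gaussian, not the paper's exact loss.
        mu = noise_pred.mean()
        var = noise_pred.var()
        return 0.5 * (var + mu.pow(2) - 1.0 - torch.log(var))

Because the encoder output is computed once, the per-timestep cost reduces to the decoder alone, which is what allows the decoder evaluations at different time steps to be sampled in parallel.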

@article{li2025_2505.21960,
  title={One-Way Ticket: Time-Independent Unified Encoder for Distilling Text-to-Image Diffusion Models},
  author={Senmao Li and Lei Wang and Kai Wang and Tao Liu and Jiehang Xie and Joost van de Weijer and Fahad Shahbaz Khan and Shiqi Yang and Yaxing Wang and Jian Yang},
  journal={arXiv preprint arXiv:2505.21960},
  year={2025}
}