
Random Conditioning with Distillation for Data-Efficient Diffusion Model Compression

Abstract

Diffusion models generate high-quality images through progressive denoising but are computationally intensive due to large model sizes and repeated sampling. Knowledge distillation, which transfers knowledge from a complex teacher to a simpler student model, has been widely studied in recognition tasks, particularly for transferring concepts unseen during student training. However, its application to diffusion models remains underexplored, especially in enabling student models to generate concepts not covered by the training images. In this work, we propose Random Conditioning, a novel approach that pairs noised images with randomly selected text conditions to enable efficient, image-free knowledge distillation. By leveraging this technique, we show that the student can generate concepts unseen in the training images. When applied to conditional diffusion model distillation, our method allows the student to explore the condition space without generating condition-specific images, resulting in notable improvements in both generation quality and efficiency. This promotes resource-efficient deployment of generative diffusion models, broadening their accessibility for both research and real-world applications. Code, models, and datasets are available at this https URL.
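
The sketch below is a minimal illustration of the random-conditioning idea described in the abstract, not the authors' released code: with some probability, the text condition paired with a noised image is swapped for a condition drawn from a larger pool that need not have any associated images, and the student is trained to match the teacher's noise prediction under that condition. The function name, the `p_random` probability, the scheduler interface, and the plain MSE distillation loss are assumptions made for illustration.

```python
import random
import torch
import torch.nn.functional as F


def random_conditioning_distill_step(
    teacher,              # frozen conditional diffusion model (noise predictor)
    student,              # smaller conditional diffusion model being trained
    optimizer,            # optimizer over the student's parameters
    images,               # batch of training images (or latents, for latent diffusion)
    captions_emb,         # text embeddings paired with `images`
    condition_pool,       # embeddings of a large set of text conditions, possibly image-free
    noise_scheduler,      # assumed to expose add_noise(x, noise, t) as in DDPM-style schedulers
    num_train_timesteps=1000,  # assumed number of diffusion steps
    p_random=0.5,         # probability of swapping in a random condition (assumed value)
):
    """One distillation step with random conditioning (illustrative sketch)."""
    b = images.size(0)
    device = images.device

    # Sample timesteps and noise the images as in standard diffusion training.
    t = torch.randint(0, num_train_timesteps, (b,), device=device)
    noise = torch.randn_like(images)
    noisy = noise_scheduler.add_noise(images, noise, t)

    # Random conditioning: with probability p_random, replace the paired caption
    # embedding with a condition sampled from the (possibly image-free) pool.
    cond = captions_emb.clone()
    for i in range(b):
        if random.random() < p_random:
            cond[i] = condition_pool[random.randrange(len(condition_pool))]

    # Distillation: match the student's noise prediction to the teacher's
    # under the (possibly randomly swapped) condition.
    with torch.no_grad():
        target = teacher(noisy, t, cond)
    pred = student(noisy, t, cond)
    loss = F.mse_loss(pred, target)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the teacher supplies the regression target at every timestep, the swapped-in conditions do not require any corresponding images, which is what lets the student explore the condition space beyond the training set.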

@article{kim2025_2504.02011,
  title={Random Conditioning with Distillation for Data-Efficient Diffusion Model Compression},
  author={Dohyun Kim and Sehwan Park and Geonhee Han and Seung Wook Kim and Paul Hongsuck Seo},
  journal={arXiv preprint arXiv:2504.02011},
  year={2025}
}