Generative Dataset Distillation using Min-Max Diffusion Model

In this paper, we address generative dataset distillation, which uses a generative model to synthesize images: the generator can produce any number of images within a fixed evaluation time budget. We leverage the popular diffusion model as the generator to compute a surrogate dataset, boosted by a min-max loss that controls the dataset's diversity and representativeness during training. However, the diffusion model is time-consuming at generation time because it requires an iterative denoising process. We observe a critical trade-off between the number of generated samples and the image quality, which is controlled by the number of diffusion steps, and propose Diffusion Step Reduction to achieve the optimal balance. This paper details our comprehensive method and its performance. Our model achieved a place in the generative track of \href{this https URL}{The First Dataset Distillation Challenge of ECCV2024}, demonstrating its superior performance.
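The two ideas above can be sketched in code. This is a minimal illustration under assumed definitions, not the paper's implementation: `min_max_loss` is a hypothetical instantiation of a min-max objective that penalizes the worst-covered real feature (representativeness) while rewarding the minimum pairwise separation among synthetic features (diversity), and `reduced_schedule` shows the generic idea behind step reduction, sampling with an evenly strided subset of the full diffusion timestep schedule.

```python
import numpy as np

def min_max_loss(synth, real, lam=0.5):
    """Hypothetical min-max objective for a distilled (synthetic) feature set.

    Representativeness: minimize the largest distance from any real feature
    to its nearest synthetic feature (no real sample is left uncovered).
    Diversity: maximize the smallest pairwise distance among synthetic
    features (synthetic samples do not collapse onto each other).
    """
    # pairwise squared Euclidean distances: real (n, d) vs. synth (m, d)
    d_rs = ((real[:, None, :] - synth[None, :, :]) ** 2).sum(-1)
    repr_term = d_rs.min(axis=1).max()   # worst-covered real sample
    # pairwise distances among synthetic samples, ignoring self-distances
    d_ss = ((synth[:, None, :] - synth[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d_ss, np.inf)
    div_term = d_ss.min()                # closest synthetic pair
    return repr_term - lam * div_term    # minimize this quantity

def reduced_schedule(T=1000, steps=50):
    """Evenly strided subset of a T-step diffusion schedule, descending.

    Fewer steps means faster generation at some cost in image quality,
    which is exactly the trade-off the abstract describes.
    """
    return list(range(T - 1, -1, -(T // steps)))[:steps]
```

For example, a synthetic set that both covers the real data and spreads out scores lower (better) under `min_max_loss` than a collapsed set, and `reduced_schedule(1000, 50)` yields 50 descending timesteps instead of the full 1000.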
@article{fan2025_2503.18626,
  title={Generative Dataset Distillation using Min-Max Diffusion Model},
  author={Junqiao Fan and Yunjiao Zhou and Min Chang Jordan Ren and Jianfei Yang},
  journal={arXiv preprint arXiv:2503.18626},
  year={2025}
}