Tuning Timestep-Distilled Diffusion Model Using Pairwise Sample Optimization

Zichen Miao
Zhengyuan Yang
Kevin Lin
Ze Wang
Zicheng Liu
Lijuan Wang
Qiang Qiu
Abstract

Recent advances in timestep-distilled diffusion models have enabled high-quality image generation that rivals non-distilled multi-step models, but with significantly fewer inference steps. While such models are attractive for applications due to their low inference cost and latency, fine-tuning them with a naive diffusion objective results in degraded and blurry outputs. An intuitive alternative is to repeat the diffusion distillation process with a fine-tuned teacher model, which produces good results but is cumbersome and computationally intensive: distillation training typically requires orders of magnitude more compute than fine-tuning for a specific image style. In this paper, we present an algorithm named pairwise sample optimization (PSO), which enables the direct fine-tuning of an arbitrary timestep-distilled diffusion model. PSO introduces additional reference images sampled from the current timestep-distilled model and increases the relative likelihood margin between the training images and the reference images. This enables the model to retain its few-step generation ability while allowing its output distribution to be fine-tuned. We also demonstrate that PSO is a generalized formulation that can be flexibly extended to both offline-sampled and online-sampled pairwise data, covering various popular objectives for diffusion-model preference optimization. We evaluate PSO on preference optimization and other fine-tuning tasks, including style transfer and concept customization. We show that PSO can directly adapt distilled models to human-preferred generation with both offline and online-generated pairwise preference image data, and that it is also effective for style transfer and concept customization by directly tuning timestep-distilled diffusion models.
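
To make the pairwise objective concrete, below is a minimal PyTorch sketch of a DPO-style pairwise margin loss of the kind the abstract describes: denoising MSE at a sampled timestep serves as a surrogate for negative log-likelihood, and the loss pushes the model's likelihood of target images above that of images sampled from the frozen distilled model. All names (`pso_pairwise_loss`, `frozen_ref`, `beta`, etc.) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def pso_pairwise_loss(model, frozen_ref, x_target, x_sampled, t,
                      alphas_cumprod, beta=0.1):
    """Hypothetical sketch of a PSO-style pairwise margin loss.

    x_target:  images from the fine-tuning dataset (preferred samples).
    x_sampled: reference images generated by the frozen distilled model.
    Denoising MSE at timestep t is used as a surrogate for negative
    log-likelihood, as in Diffusion-DPO-style objectives.
    """
    noise = torch.randn_like(x_target)
    a = alphas_cumprod[t].view(-1, 1, 1, 1)

    def denoise_err(net, x0):
        # Forward-diffuse x0 to timestep t, then score the noise prediction.
        x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise
        return F.mse_loss(net(x_t, t), noise,
                          reduction="none").mean(dim=(1, 2, 3))

    err_tgt = denoise_err(model, x_target)
    err_smp = denoise_err(model, x_sampled)
    with torch.no_grad():  # frozen distilled model as the reference
        ref_tgt = denoise_err(frozen_ref, x_target)
        ref_smp = denoise_err(frozen_ref, x_sampled)

    # Increase the likelihood margin of target images over the model's own
    # samples, measured relative to the frozen distilled reference.
    margin = (err_tgt - ref_tgt) - (err_smp - ref_smp)
    return -F.logsigmoid(-beta * margin).mean()
```

In this reading, the frozen distilled model anchors the optimization so the fine-tuned model stays close to its few-step generation regime while the margin term shifts its output distribution toward the training images.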

@article{miao2025_2410.03190,
  title={Tuning Timestep-Distilled Diffusion Model Using Pairwise Sample Optimization},
  author={Zichen Miao and Zhengyuan Yang and Kevin Lin and Ze Wang and Zicheng Liu and Lijuan Wang and Qiang Qiu},
  journal={arXiv preprint arXiv:2410.03190},
  year={2025}
}