Fine-Tuning Diffusion Generative Models via Rich Preference Optimization

Abstract

We introduce Rich Preference Optimization (RPO), a novel pipeline that leverages rich feedback signals to improve the curation of preference pairs for fine-tuning text-to-image diffusion models. Traditional methods, like Diffusion-DPO, often rely solely on reward-model labeling, which can be opaque, offers limited insight into the rationale behind preferences, and is prone to issues such as reward hacking and overfitting. In contrast, our approach begins by generating detailed critiques of synthesized images, from which we extract reliable and actionable image-editing instructions. By implementing these instructions, we create refined images, resulting in synthetic, informative preference pairs that serve as enhanced fine-tuning datasets. We demonstrate the effectiveness of our pipeline and the resulting datasets in fine-tuning state-of-the-art diffusion models.
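
To make the described pipeline concrete, below is a minimal sketch of the data-curation loop the abstract outlines: sample an image, critique it, turn the critique into editing instructions, apply the edits, and record the (refined, original) pair for preference optimization. All object and method names (diffusion_model.generate, critic.critique, critic.extract_edit_instructions, editor.edit) are hypothetical placeholders assumed for illustration, not the authors' actual interfaces.

```python
def build_rich_preference_pairs(prompts, diffusion_model, critic, editor):
    """Curate synthetic (preferred, dispreferred) image pairs from rich critique feedback.

    This is a hedged sketch of the RPO curation loop described in the abstract;
    the components below are placeholders, not a released API.
    """
    pairs = []
    for prompt in prompts:
        # 1. Sample an image from the current diffusion model.
        image = diffusion_model.generate(prompt)
        # 2. Obtain a detailed critique of the image (e.g. from a vision-language model).
        critique = critic.critique(prompt, image)
        # 3. Convert the critique into actionable image-editing instructions.
        instructions = critic.extract_edit_instructions(critique)
        # 4. Apply the instructions with an image-editing model to obtain a refined image.
        refined = editor.edit(image, instructions)
        # 5. Treat the refined image as preferred over the original, yielding a synthetic pair.
        pairs.append({"prompt": prompt, "preferred": refined, "dispreferred": image})
    return pairs
```

The resulting pairs could then be plugged into a preference-optimization objective such as Diffusion-DPO to fine-tune the diffusion model; the exact objective and components used are detailed in the paper itself.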

@article{zhao2025_2503.11720,
  title={Fine-Tuning Diffusion Generative Models via Rich Preference Optimization},
  author={Hanyang Zhao and Haoxian Chen and Yucheng Guo and Genta Indra Winata and Tingting Ou and Ziyu Huang and David D. Yao and Wenpin Tang},
  journal={arXiv preprint arXiv:2503.11720},
  year={2025}
}