
Towards Self-Improvement of Diffusion Models via Group Preference Optimization

Abstract

Aligning text-to-image (T2I) diffusion models with Direct Preference Optimization (DPO) has yielded notable improvements in generation quality. However, applying DPO to T2I faces two challenges: the sensitivity of DPO to preference pairs and the labor-intensive process of collecting and annotating high-quality data. In this work, we demonstrate that preference pairs with marginal differences can degrade DPO performance. Because DPO relies exclusively on relative ranking and disregards the absolute difference between pairs, it may misclassify losing samples as wins, or vice versa. We empirically show that extending DPO from pairwise to groupwise and incorporating reward standardization for reweighting leads to performance gains without explicit data selection. Furthermore, we propose Group Preference Optimization (GPO), an effective self-improvement method that enhances performance by leveraging the model's own capabilities without requiring external data. Extensive experiments demonstrate that GPO is effective across various diffusion models and tasks. Specifically, when combined with widely used computer vision models such as YOLO and OCR, GPO improves the accurate counting and text rendering capabilities of Stable Diffusion 3.5 Medium by 20 percentage points. Notably, as a plug-and-play method, GPO introduces no extra overhead during inference.

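As a rough illustration of the groupwise reweighting idea described in the abstract, the sketch below standardizes rewards within each group of samples generated for the same prompt and uses the resulting z-scores to weight a policy-versus-reference log-ratio objective. The function names, tensor shapes, and the simple weighted loss are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def group_standardized_rewards(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # rewards: (num_prompts, group_size) raw scores, e.g. from a YOLO counter
    # or an OCR-based text-rendering scorer. Standardize within each group so
    # above-average samples get positive weight and below-average negative.
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)

def group_preference_loss(log_ratio: torch.Tensor, rewards: torch.Tensor,
                          beta: float = 0.1) -> torch.Tensor:
    # log_ratio: (num_prompts, group_size) of log pi_theta(x|c) - log pi_ref(x|c).
    # Assumed weighted objective: push up the likelihood ratio of samples with
    # above-average group reward and push down the rest, scaled by the
    # standardized advantage (a DPO-flavored reweighting, not the exact GPO loss).
    w = group_standardized_rewards(rewards)
    return -(w * beta * log_ratio).mean()
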
@article{chen2025_2505.11070,
  title={Towards Self-Improvement of Diffusion Models via Group Preference Optimization},
  author={Renjie Chen and Wenfeng Lin and Yichen Zhang and Jiangchuan Wei and Boyuan Liu and Chao Feng and Jiao Ran and Mingyu Guo},
  journal={arXiv preprint arXiv:2505.11070},
  year={2025}
}