
D-Fusion: Direct Preference Optimization for Aligning Diffusion Models with Visually Consistent Samples

Abstract

The practical applications of diffusion models have been limited by the misalignment between generated images and their corresponding text prompts. Recent studies have introduced direct preference optimization (DPO) to improve the alignment of these models. However, the effectiveness of DPO is constrained by visual inconsistency: the large visual disparity between well-aligned and poorly-aligned images prevents diffusion models from identifying which factors actually contribute to alignment during fine-tuning. To address this issue, this paper introduces D-Fusion, a method for constructing DPO-trainable, visually consistent samples. On the one hand, by performing mask-guided self-attention fusion, the resulting images are not only well-aligned but also visually consistent with the given poorly-aligned images. On the other hand, D-Fusion retains the denoising trajectories of the resulting images, which are essential for DPO training. Extensive experiments demonstrate the effectiveness of D-Fusion in improving prompt-image alignment when applied across different reinforcement learning algorithms.
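
The abstract names two ingredients: mask-guided self-attention fusion to produce a well-aligned image that stays visually consistent with a poorly-aligned one, and a DPO-style preference loss over the resulting pair. The sketch below is only illustrative; the function names, the token-wise fusion rule, and the exact loss form are assumptions for exposition and are not taken from the paper.

# Illustrative sketch (assumed details): (1) mask-guided fusion of
# self-attention features between a well-aligned reference branch and a
# poorly-aligned target branch; (2) a Diffusion-DPO-style pairwise loss.
import torch
import torch.nn.functional as F

def fuse_self_attention(feat_ref, feat_tgt, mask):
    # feat_ref: features from the well-aligned (reference) denoising branch.
    # feat_tgt: features from the poorly-aligned (target) branch.
    # mask:     per-token weights in [0, 1]; 1 copies the reference feature,
    #           0 keeps the target feature, preserving visual consistency
    #           outside the masked region.
    return mask * feat_ref + (1.0 - mask) * feat_tgt

def dpo_pair_loss(logp_win, logp_lose, logp_ref_win, logp_ref_lose, beta=0.1):
    # Prefer the fused, well-aligned sample (win) over the original
    # poorly-aligned sample (lose), regularized against a frozen reference
    # model via the log-probability ratios.
    margin = beta * ((logp_win - logp_ref_win) - (logp_lose - logp_ref_lose))
    return -F.logsigmoid(margin).mean()

# Toy usage with random tensors (batch of 2, 64 tokens, 8 channels).
feat_ref = torch.randn(2, 64, 8)
feat_tgt = torch.randn(2, 64, 8)
mask = (torch.rand(2, 64, 1) > 0.5).float()   # hypothetical alignment mask
fused = fuse_self_attention(feat_ref, feat_tgt, mask)
loss = dpo_pair_loss(torch.randn(2), torch.randn(2),
                     torch.randn(2), torch.randn(2))
print(fused.shape, loss.item())

In this reading, the mask decides which spatial tokens inherit features from the well-aligned branch, so the fused result differs from the poorly-aligned image only where alignment requires it, which is what makes the pair informative for DPO.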

@article{hu2025_2505.22002,
  title={D-Fusion: Direct Preference Optimization for Aligning Diffusion Models with Visually Consistent Samples},
  author={Zijing Hu and Fengda Zhang and Kun Kuang},
  journal={arXiv preprint arXiv:2505.22002},
  year={2025}
}