
Bridging and Modeling Correlations in Pairwise Data for Direct Preference Optimization

Yuxin Jiang, Bo Huang, Yufei Wang, Xingshan Zeng, Liangyou Li, Yasheng Wang, Xin Jiang, Lifeng Shang, Ruiming Tang, Wei Wang
Abstract

Direct preference optimization (DPO), a widely adopted offline preference optimization algorithm, aims to align large language models (LLMs) with human-desired behaviors using pairwise preference data. However, the winning response and the losing response within a pair are typically generated in isolation, leading to weak correlations between them and, in turn, suboptimal alignment performance. To address this issue, we propose an effective framework for Bridging and Modeling Correlations in pairwise data, named BMC. First, we increase the consistency and informativeness of the pairwise preference signals through targeted modifications, synthesizing a pseudo-winning response by improving the losing response with the winning response as a reference. Second, we identify that DPO alone is insufficient to model these correlations and capture nuanced variations. We therefore propose learning token-level correlations by dynamically leveraging the policy model's confidence during training. Comprehensive experiments on QA, math, and instruction-following tasks demonstrate the effectiveness of our approach, which significantly surpasses competitive baselines, including DPO. Our in-depth quantitative analysis further reveals why our method outperforms DPO and showcases its versatility with other DPO variants. We release our repository at this https URL.
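
The second component described above, weighting token-level correlations via the policy model's confidence, can be sketched on top of the standard DPO objective. The sketch below is a minimal illustration, not the paper's implementation: the function name, the per-token weight tensors, and the exact way weights enter the log-ratios are assumptions made for exposition.

import torch
import torch.nn.functional as F

def token_weighted_dpo_loss(policy_logps_w, policy_logps_l,
                            ref_logps_w, ref_logps_l,
                            weights_w, weights_l, beta=0.1):
    # Hypothetical sketch of a token-weighted DPO loss.
    # *_logps_*: per-token log-probs, shape (batch, seq_len), padding zeroed.
    # weights_*: per-token weights, e.g. derived from the policy's confidence
    # on each token; the exact weighting scheme is an assumption, not the
    # paper's formula.
    logratio_w = ((policy_logps_w - ref_logps_w) * weights_w).sum(-1)
    logratio_l = ((policy_logps_l - ref_logps_l) * weights_l).sum(-1)
    # Standard DPO form: maximize the margin between the (weighted)
    # winning and losing log-ratios against the frozen reference model.
    return -F.logsigmoid(beta * (logratio_w - logratio_l)).mean()

With all weights set to 1, this reduces to the vanilla sequence-level DPO objective, since DPO sums per-token log-probabilities to score each response.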

@article{jiang2025_2408.07471,
  title={Bridging and Modeling Correlations in Pairwise Data for Direct Preference Optimization},
  author={Yuxin Jiang and Bo Huang and Yufei Wang and Xingshan Zeng and Liangyou Li and Yasheng Wang and Xin Jiang and Lifeng Shang and Ruiming Tang and Wei Wang},
  journal={arXiv preprint arXiv:2408.07471},
  year={2025}
}