Beyond Bradley-Terry Models: A General Preference Model for Language Model Alignment

Modeling human preferences is crucial for aligning foundation models with human values. Traditional reward modeling methods, such as the Bradley-Terry (BT) reward model, fall short in expressiveness, particularly in addressing intransitive preferences. In this paper, we introduce preference embedding, an approach that embeds responses into a latent space to capture intricate preference structures efficiently, achieving linear query complexity. Additionally, we propose preference score-based General Preference Optimization (GPO), which generalizes reward-based reinforcement learning from human feedback (RLHF). Experimental results show that our General Preference embedding Model (GPM) consistently outperforms the BT reward model on the RewardBench benchmark and effectively models cyclic preferences, where any BT reward model reduces to random guessing. Furthermore, evaluations on downstream tasks such as AlpacaEval2.0, after language model post-training with GPO and our general preference model, reveal performance improvements over BT models. These findings indicate that our method may enhance the alignment of foundation models with nuanced human values. The code is available at this https URL.
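
The abstract itself contains no code; as a rough illustration only, the sketch below shows one way a preference score over response embeddings could be computed with an anti-symmetric pairing, so that swapping the two responses flips the score's sign. This is an assumption about the general form of such a model, not the authors' implementation; the function name `preference_score` and the coordinate-pair construction are hypothetical.

```python
# Illustrative sketch (not the paper's code): an anti-symmetric preference score
# over response embeddings. Each response is embedded once, so comparing N
# responses needs N embeddings (linear query complexity) rather than N^2
# pairwise reward evaluations.
import torch


def preference_score(v1: torch.Tensor, v2: torch.Tensor) -> torch.Tensor:
    """Score for "response 1 preferred over response 2" from embeddings of even dimension 2k.

    Coordinates are grouped into (a, b) pairs and combined as 2x2 determinants,
    so preference_score(v1, v2) == -preference_score(v2, v1). This anti-symmetry
    is what allows cyclic (intransitive) preferences, which a scalar
    Bradley-Terry reward cannot represent.
    """
    a1, b1 = v1[..., 0::2], v1[..., 1::2]  # split coordinates into pairs
    a2, b2 = v2[..., 0::2], v2[..., 1::2]
    return (a1 * b2 - b1 * a2).sum(dim=-1)  # sum of 2x2 determinants


# Usage: the score flips sign when the two (hypothetical) response embeddings swap.
v_a, v_b = torch.randn(2, 8)
print(preference_score(v_a, v_b), preference_score(v_b, v_a))
```
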
@article{zhang2025_2410.02197,
  title   = {Beyond Bradley-Terry Models: A General Preference Model for Language Model Alignment},
  author  = {Yifan Zhang and Ge Zhang and Yue Wu and Kangping Xu and Quanquan Gu},
  journal = {arXiv preprint arXiv:2410.02197},
  year    = {2025}
}