
Doubly Robust Alignment for Large Language Models

Main: 10 pages · 6 figures · 11 tables · Bibliography: 11 pages · Appendix: 23 pages
Abstract

This paper studies reinforcement learning from human feedback (RLHF) for aligning large language models with human preferences. While RLHF has demonstrated promising results, many algorithms are highly sensitive to misspecification of the underlying preference model (e.g., the Bradley-Terry model), the reference policy, or the reward function, resulting in undesirable fine-tuning. To address model misspecification, we propose a doubly robust preference optimization algorithm that remains consistent when either the preference model or the reference policy is correctly specified (without requiring both). Our proposal achieves better and more robust performance than state-of-the-art algorithms, both in theory and in practice. The code is available at this https URL.
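
For context, the Bradley-Terry model cited in the abstract is the standard preference model in RLHF: the probability of preferring one response over another is a logistic function of their reward difference. A minimal sketch in generic notation (the symbols below are standard, not taken from the paper itself):

% Bradley-Terry preference model (standard form; notation assumed, not the paper's):
% sigma is the logistic function, r the reward, x the prompt, y_1 and y_2 candidate responses.
P(y_1 \succ y_2 \mid x) = \sigma\bigl(r(x, y_1) - r(x, y_2)\bigr),
\qquad \sigma(t) = \frac{1}{1 + e^{-t}}

"Doubly robust" carries its usual statistical meaning: the estimator remains consistent so long as at least one of two nuisance components is correctly specified, which here are the preference model and the reference policy.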

@article{xu2025_2506.01183,
  title={Doubly Robust Alignment for Large Language Models},
  author={Erhan Xu and Kai Ye and Hongyi Zhou and Luhan Zhu and Francesco Quinzan and Chengchun Shi},
  journal={arXiv preprint arXiv:2506.01183},
  year={2025}
}