Bootstrapping Language Models with DPO Implicit Rewards

Human alignment in large language models (LLMs) is an active area of research. A recent groundbreaking work, direct preference optimization (DPO), has greatly simplified the process relative to past work on reinforcement learning from human feedback (RLHF) by bypassing its reward-learning stage. After training, DPO provides an implicit reward model. In this work, we make a novel observation that this implicit reward model can itself be used in a bootstrapping fashion to further align the LLM. Our approach is to use the implicit rewards from the current LLM to construct a preference dataset, which is then used in subsequent DPO rounds. We incorporate two refinements to further improve our approach: 1) length-regularized reward shaping to make the preference dataset length-unbiased; 2) experience replay to enhance the quality of the preference dataset. Our approach, named self-alignment with DPO ImpliCit rEwards (DICE), shows great improvements in alignment. It achieves an increase of more than 8 points in length-controlled win rate on AlpacaEval 2 for all the different base models that we tried, without relying on external feedback. Our code is available at this https URL.
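To make the bootstrapping step concrete, the sketch below scores sampled responses with the DPO implicit reward, r(x, y) = beta * [log pi_theta(y|x) - log pi_ref(y|x)], and applies a simple length penalty in the spirit of the length-regularized reward shaping described above. It is a minimal illustration rather than the authors' released implementation: the model names, the penalty coefficient alpha, and the helper sequence_logprob are assumptions made for the example.

```python
# Minimal sketch: score sampled responses with the DPO implicit reward
# (plus an assumed length penalty) and form a new preference pair.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

def sequence_logprob(model, tokenizer, prompt: str, response: str) -> float:
    """Sum of token log-probabilities of `response` conditioned on `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + response, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits            # (1, seq_len, vocab)
    logps = F.log_softmax(logits[:, :-1, :], dim=-1)  # predict token t+1 from prefix
    targets = full_ids[:, 1:]
    token_logps = logps.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    resp_start = prompt_ids.shape[1] - 1           # keep only response tokens
    return token_logps[:, resp_start:].sum().item()

def implicit_reward(policy, ref, tokenizer, prompt, response,
                    beta=0.1, alpha=0.01):
    """DPO implicit reward with a length penalty (alpha is an assumed hyperparameter)."""
    r = beta * (sequence_logprob(policy, tokenizer, prompt, response)
                - sequence_logprob(ref, tokenizer, prompt, response))
    n_resp_tokens = len(tokenizer(response).input_ids)
    return r - alpha * n_resp_tokens               # discourage length bias

# Usage sketch (model names are placeholders):
# policy = AutoModelForCausalLM.from_pretrained("dpo-tuned-model")
# ref    = AutoModelForCausalLM.from_pretrained("reference-model")
# tok    = AutoTokenizer.from_pretrained("dpo-tuned-model")
# scores = [implicit_reward(policy, ref, tok, prompt, y) for y in sampled_responses]
# chosen   = sampled_responses[max(range(len(scores)), key=scores.__getitem__)]
# rejected = sampled_responses[min(range(len(scores)), key=scores.__getitem__)]
```

Under this sketch, each (chosen, rejected) pair produced by ranking sampled responses would be added to the preference dataset (optionally mixed with replayed data from earlier rounds) before running the next DPO round.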