Policy-labeled Preference Learning: Is Preference Enough for RLHF?

To design rewards that align with human goals, Reinforcement Learning from Human Feedback (RLHF) has emerged as a prominent technique for learning reward functions from human preferences and optimizing policies via reinforcement learning algorithms. However, existing RLHF methods often misinterpret trajectories as being generated by an optimal policy, causing inaccurate likelihood estimation and suboptimal learning. Inspired by the Direct Preference Optimization (DPO) framework, which directly learns an optimal policy without an explicit reward model, we propose policy-labeled preference learning (PPL), which resolves the likelihood mismatch by modeling human preferences with regret, thereby reflecting information about the behavior policy. We also introduce a contrastive KL regularization term, derived from regret-based principles, to enhance RLHF in sequential decision making. Experiments on high-dimensional continuous control tasks demonstrate that PPL significantly improves offline RLHF performance and is also effective in online settings.
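As a rough illustration of the regret-based preference modeling mentioned above, the sketch below scores a pair of trajectory segments with a Bradley-Terry model in which lower estimated regret makes a segment more likely to be preferred. This is only a minimal sketch under assumed inputs: the function name, the precomputed regret estimates, and the binary preference labels are hypothetical, and PPL's full objective (including the policy labels and the contrastive KL regularization) is defined in the paper, not here.

```python
import torch
import torch.nn.functional as F

def regret_preference_loss(regret_a: torch.Tensor,
                           regret_b: torch.Tensor,
                           prefs: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry preference loss over regret estimates (illustrative only).

    regret_a, regret_b: estimated cumulative regrets of segments A and B,
        shape (batch,). Lower regret means behavior closer to optimal.
    prefs: 1.0 where segment A is preferred, 0.0 where B is preferred.
    """
    # Model P(A preferred over B) = sigmoid(regret_B - regret_A),
    # so the lower-regret segment receives higher preference probability.
    logits = regret_b - regret_a
    return F.binary_cross_entropy_with_logits(logits, prefs)

# Example usage with dummy data.
regret_a = torch.tensor([0.2, 1.5, 0.7])
regret_b = torch.tensor([0.9, 0.3, 0.7])
prefs = torch.tensor([1.0, 0.0, 0.5])  # 0.5 encodes an indifferent label
loss = regret_preference_loss(regret_a, regret_b, prefs)
print(loss.item())
```

In contrast to reward-based RLHF losses, which assume segments were generated (near-)optimally, a regret-based likelihood of this form explicitly accounts for how far the behavior policy is from optimal, which is the mismatch PPL targets.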
@article{cho2025_2505.06273,
  title   = {Policy-labeled Preference Learning: Is Preference Enough for RLHF?},
  author  = {Taehyun Cho and Seokhun Ju and Seungyub Han and Dohyeong Kim and Kyungjae Lee and Jungwoo Lee},
  journal = {arXiv preprint arXiv:2505.06273},
  year    = {2025}
}