
Strategyproof Reinforcement Learning from Human Feedback

Abstract

We study Reinforcement Learning from Human Feedback (RLHF) in a setting where multiple individuals with diverse preferences provide feedback strategically to sway the final policy in their favor. We show that existing RLHF methods are not strategyproof, which can result in learning a substantially misaligned policy even when only one out of k individuals reports their preferences strategically. We further show that any strategyproof RLHF algorithm must perform k-times worse than the optimal policy, highlighting an inherent trade-off between incentive alignment and policy alignment. We then propose a pessimistic median algorithm that, under appropriate coverage assumptions, is approximately strategyproof and converges to the optimal policy as the number of individuals and samples grows.
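To illustrate the intuition behind a pessimistic, median-based aggregation (this is a minimal sketch under assumed simplifications, not the paper's actual algorithm), the snippet below combines per-individual reward estimates by taking the coordinate-wise median of lower-confidence bounds; the helper names and the uncertainty model are hypothetical.

```python
# Illustrative sketch only: median-of-lower-bounds aggregation of reward
# estimates from k individuals, followed by greedy action selection.
# All function names and the uncertainty model are assumptions for this example.
import numpy as np


def pessimistic_median_scores(reward_estimates: np.ndarray,
                              uncertainties: np.ndarray) -> np.ndarray:
    """Aggregate k individuals' reward estimates over candidate actions.

    reward_estimates: shape (k, n_actions), one row per individual.
    uncertainties:    shape (k, n_actions), e.g. confidence-interval widths
                      reflecting each individual's sample coverage.
    Returns a pessimistic aggregate score per action.
    """
    # Pessimism: use each individual's lower-confidence bound.
    lower_bounds = reward_estimates - uncertainties
    # Median across individuals: a single strategic reporter can move the
    # median only slightly, which is the intuition behind approximate
    # strategyproofness of median-style aggregation.
    return np.median(lower_bounds, axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    k, n_actions = 7, 5
    true_reward = rng.uniform(0, 1, size=n_actions)
    estimates = true_reward + 0.1 * rng.standard_normal((k, n_actions))
    widths = 0.05 * np.ones((k, n_actions))

    # One strategic individual wildly inflates a favored action's report.
    estimates[0, 2] += 10.0

    scores = pessimistic_median_scores(estimates, widths)
    print("chosen action:", int(np.argmax(scores)))  # robust to the manipulation
```

In this toy example the single manipulated report does not change the selected action, whereas averaging the reports instead of taking the median would.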
