A Descriptive and Normative Theory of Human Beliefs in RLHF

2 June 2025
Sylee Dandekar, Shripad Deshmukh, Frank Chiu, W. Bradley Knox, Scott Niekum
arXiv (abs) · PDF · HTML
9 figures · 2 tables · Bibliography: 1 page · Appendix: 15 pages
Abstract

Human preferences in RLHF are typically modeled as a function of the human's reward function or corresponding optimal state-action values. In this work, we propose that human beliefs about the capabilities of the agent being trained also play a key role in preference generation. We examine two questions related to this hypothesis, one descriptive and one normative, respectively: Do human labelers' beliefs about agent capabilities affect the preferences that they provide? And what is the ideal set of beliefs about an agent -- and resulting preferences -- for humans to have? We propose a new preference model that incorporates human beliefs and provide a normative theory that bounds the error on the final learned policy based on the mismatch between the human's beliefs and an idealized set of beliefs. We then confirm via a human study that beliefs about agent capabilities do, in fact, significantly affect preferences and can be influenced through simple interventions. Additionally, we empirically show through synthetic experiments that it is often suboptimal for human preference labelers to assume agent optimality. Collectively, these results theoretically and empirically demonstrate how reducing the mismatch between human beliefs and agent capabilities can lead to more performant RLHF and point toward new best practices for RLHF practitioners.
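
The abstract does not spell out the proposed preference model, so the snippet below is only a minimal sketch, under assumed details, of how a labeler's beliefs about agent capability could enter a Bradley-Terry-style preference model: each segment's score is bootstrapped with the return the labeler believes the agent can obtain after the segment ends, rather than with the optimal value. All function names, reward numbers, and the bootstrapping choice are illustrative assumptions, not the paper's formulation.

import numpy as np

def bradley_terry_prob(score_a, score_b, beta=1.0):
    # Boltzmann-rational (Bradley-Terry) choice rule: probability that the
    # labeler prefers segment A over segment B, given scalar segment scores.
    return 1.0 / (1.0 + np.exp(-beta * (score_a - score_b)))

def segment_score(rewards, believed_tail_value):
    # Score a trajectory segment as the sum of its rewards plus the return the
    # labeler *believes* the agent will collect after the segment ends. In the
    # usual optimality-assuming model this tail term would come from the optimal
    # value function; here it encodes the labeler's beliefs about capability.
    return float(np.sum(rewards)) + believed_tail_value

# Hypothetical comparison: segment A has a delayed payoff whose usefulness
# depends on what the agent does next; segment B pays off steadily.
seg_a_rewards = np.array([0.0, 0.0, 1.0])
seg_b_rewards = np.array([0.4, 0.4, 0.0])

# The same labeler, with different beliefs about the agent's capability,
# assigns a different tail value to A's end state (numbers are made up).
p_optimistic = bradley_terry_prob(segment_score(seg_a_rewards, 2.0),
                                  segment_score(seg_b_rewards, 0.0))
p_pessimistic = bradley_terry_prob(segment_score(seg_a_rewards, 0.0),
                                   segment_score(seg_b_rewards, 0.0))
print(f"P(A preferred | optimistic belief)  = {p_optimistic:.2f}")   # ~0.90
print(f"P(A preferred | pessimistic belief) = {p_pessimistic:.2f}")  # ~0.55

In this toy setup the preference over the same two segments shifts substantially with the assumed belief about the agent, which is the kind of belief-dependence the paper's human study and normative error bound are concerned with.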

@article{dandekar2025_2506.01692,
  title={A Descriptive and Normative Theory of Human Beliefs in RLHF},
  author={Sylee Dandekar and Shripad Deshmukh and Frank Chiu and W. Bradley Knox and Scott Niekum},
  journal={arXiv preprint arXiv:2506.01692},
  year={2025}
}