Learning Guarantee of Reward Modeling Using Deep Neural Networks

In this work, we study the learning theory of reward modeling with pairwise comparison data using deep neural networks. We establish a novel non-asymptotic regret bound for deep reward estimators in a non-parametric setting, which depends explicitly on the network architecture. Furthermore, to underscore the critical importance of clear human beliefs, we introduce a margin-type condition that assumes the conditional winning probability of the optimal action in pairwise comparisons is bounded away from 1/2. This condition enables a sharper regret bound, which substantiates the empirical efficiency of Reinforcement Learning from Human Feedback and highlights the role of clear human beliefs in its success. Notably, this improvement stems from the high-quality pairwise comparison data implied by the margin-type condition; it is independent of the specific estimator used and thus applies to a variety of learning algorithms and models.
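To make the margin-type condition concrete, the following is one plausible formalization, a sketch assuming a Bradley–Terry-style comparison model (the paper's exact definitions may differ). For a context $x$ with optimal action $a^*(x)$, true reward $r^*$, and any alternative action $a$, the conditional winning probability of the optimal action is

$$ p(x, a) \;=\; \Pr\bigl(a^*(x) \succ a \mid x\bigr) \;=\; \sigma\bigl(r^*(x, a^*(x)) - r^*(x, a)\bigr), \qquad \sigma(t) = \frac{1}{1 + e^{-t}}, $$

and a margin-type condition of the kind described above would require, for some margin $\Delta > 0$,

$$ p(x, a) \;\ge\; \tfrac{1}{2} + \Delta \quad \text{for all } a \ne a^*(x). $$

Under such a condition, comparisons near the decision boundary ($p \approx 1/2$), where human preferences are ambiguous, are ruled out; this is the mechanism by which clearer human beliefs translate into higher-quality comparison data and hence a sharper regret bound, independently of the particular estimator.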
@article{luo2025_2505.06601,
  title   = {Learning Guarantee of Reward Modeling Using Deep Neural Networks},
  author  = {Yuanhang Luo and Yeheng Ge and Ruijian Han and Guohao Shen},
  journal = {arXiv preprint arXiv:2505.06601},
  year    = {2025}
}