
Sample-Efficient Reinforcement Learning from Human Feedback via Information-Directed Sampling

IEEE Transactions on Information Theory (IEEE Trans. Inf. Theory), 2025
Main: 17 pages · Bibliography: 3 pages · Appendix: 13 pages
Abstract

We study the problem of reinforcement learning from human feedback (RLHF), a critical problem in training large language models, from a theoretical perspective. Our main contribution is the design of novel sample-efficient RLHF algorithms based on information-directed sampling (IDS), an online decision-making principle inspired by information theory. Our algorithms maximize the sum of the value function and a mutual information term that encourages exploration of the unknown environment by quantifying the information gained about the environment from observed human feedback data. To tackle the challenge of large state spaces and improve sample efficiency, we construct a simplified \emph{surrogate environment} and introduce a novel distance measure (named the \emph{$\ell_g$-distance}), enabling our IDS-based algorithm to achieve a Bayesian regret upper bound of order $O(H^{3/2}\sqrt{\log(K(\epsilon))\,T})$, where $H$ is the episode length, $T$ is the number of episodes, and $K(\epsilon)$ is related to the covering number of the environment. Specialized to the tabular setting, this regret bound is of order $\tilde{O}(H^2\sqrt{SAT})$, where $S$ and $A$ are the numbers of states and actions. Finally, we propose an Approximate-IDS algorithm that is computationally more efficient while maintaining nearly the same sample efficiency. The design principle of this approximate algorithm is effective not only in RLHF settings but also in the standard RL framework. Moreover, our work showcases the value of information theory in reinforcement learning and in the training of large language models.
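To make the IDS objective described above concrete, here is a minimal sketch (not the paper's algorithm) of the general principle: a Bayesian agent maintains a posterior over a small set of candidate environments and selects the action maximizing expected reward plus a weighted mutual-information term. The two-arm bandit, the three candidate environments, and the trade-off weight `lam` are all hypothetical choices for illustration; the paper works with episodic RLHF and a surrogate environment instead.

```python
# Hypothetical IDS-style sketch: a Bayesian agent over a finite set of
# candidate Bernoulli-bandit environments picks the arm maximizing
# (expected reward) + lam * (information gain), where information gain is
# the mutual information between the arm's outcome and the environment.
import math

# Three candidate environments, each giving success probabilities for 2 arms
# (illustrative values, not from the paper).
envs = [(0.9, 0.1), (0.5, 0.5), (0.2, 0.8)]
posterior = [1 / 3, 1 / 3, 1 / 3]  # uniform prior over environments
lam = 1.0                          # exploitation/information trade-off weight

def entropy(ps):
    """Shannon entropy (nats) of a probability vector."""
    return -sum(p * math.log(p) for p in ps if p > 0)

def info_gain(arm):
    """Mutual information I(environment; outcome of `arm`) under the posterior."""
    p1 = sum(w * e[arm] for w, e in zip(posterior, envs))  # P(outcome = 1)
    mi = entropy(posterior)
    for outcome, p_out in ((1, p1), (0, 1 - p1)):
        if p_out == 0:
            continue
        # Bayes update of the posterior given this outcome.
        like = [e[arm] if outcome else 1 - e[arm] for e in envs]
        post = [w * l / p_out for w, l in zip(posterior, like)]
        mi -= p_out * entropy(post)  # subtract expected posterior entropy
    return mi

def ids_arm():
    """Arm maximizing expected reward plus lam-weighted information gain."""
    scores = [
        sum(w * e[a] for w, e in zip(posterior, envs)) + lam * info_gain(a)
        for a in range(2)
    ]
    return max(range(2), key=lambda a: scores[a])
```

In a full loop, the agent would play `ids_arm()`, observe feedback, update `posterior` by Bayes' rule, and repeat; the paper's surrogate-environment construction replaces the exhaustive posterior over environments to keep this tractable in large state spaces.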
