
Improved Bayesian Regret Bounds for Thompson Sampling in Reinforcement Learning

Abstract

In this paper, we prove the first Bayesian regret bounds for Thompson Sampling in reinforcement learning in a multitude of settings. We simplify the learning problem using a discrete set of surrogate environments, and present a refined analysis of the information ratio using posterior consistency. This leads to an upper bound of order $\widetilde{O}(H\sqrt{d_{l_1}T})$ in the time-inhomogeneous reinforcement learning problem, where $H$ is the episode length and $d_{l_1}$ is the Kolmogorov $l_1$-dimension of the space of environments. We then find concrete bounds on $d_{l_1}$ in a variety of settings, such as tabular, linear, and finite mixtures, and discuss how our results are either the first of their kind or improve the state of the art.
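To illustrate the algorithm the paper analyzes, the following is a minimal sketch of Thompson Sampling in the simplest posterior-sampling setting, a Bernoulli bandit with Beta priors; this toy example is not from the paper (which treats episodic reinforcement learning), and the arm means and horizon are illustrative assumptions.

```python
import random

def thompson_sampling(true_means, horizon, seed=0):
    """Thompson Sampling for a Bernoulli bandit (toy illustration).

    Each arm gets an independent Beta(1, 1) prior; at every round we
    sample a mean from each arm's posterior, play the arm with the
    largest sample, and update that arm's posterior with the observed
    0/1 reward. Returns the total accumulated reward.
    """
    rng = random.Random(seed)
    k = len(true_means)
    alpha = [1] * k  # Beta posterior successes + 1
    beta = [1] * k   # Beta posterior failures + 1
    total_reward = 0
    for _ in range(horizon):
        # Draw one sample per arm from its current posterior.
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        # Observe a Bernoulli reward from the chosen arm.
        reward = 1 if rng.random() < true_means[arm] else 0
        total_reward += reward
        # Conjugate posterior update.
        if reward:
            alpha[arm] += 1
        else:
            beta[arm] += 1
    return total_reward
```

As the posterior concentrates, samples from suboptimal arms rarely exceed those of the best arm, so exploration decays automatically; the paper's analysis bounds the Bayesian regret of the analogous posterior-sampling procedure over environments in episodic RL.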
