
Randomized Exploration is Near-Optimal for Tabular MDP

Neural Information Processing Systems (NeurIPS), 2021
Abstract

We study exploration using randomized value functions in Thompson Sampling (TS)-like algorithms in reinforcement learning. Algorithms of this type enjoy appealing empirical performance. We show that when we use 1) a single random seed in each episode, and 2) a Bernstein-type magnitude of noise, we obtain a worst-case $\widetilde{O}\left(H\sqrt{SAT}\right)$ regret bound for episodic time-inhomogeneous Markov Decision Processes, where $S$ is the size of the state space, $A$ is the size of the action space, $H$ is the planning horizon, and $T$ is the number of interactions. This bound polynomially improves all existing bounds for TS-like algorithms based on randomized value functions and, for the first time, matches the $\Omega\left(H\sqrt{SAT}\right)$ lower bound up to logarithmic factors. Our result highlights that randomized exploration can be near-optimal, which was previously only achieved by optimistic algorithms.
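To make the two algorithmic ingredients concrete, here is a minimal sketch of one episode of randomized value iteration over a tabular model. This is an illustrative reconstruction, not the paper's exact algorithm: the function name, the precise noise scaling (empirical next-step variance over visit counts plus a lower-order $H/n$ term, standing in for a Bernstein-type magnitude), and the clipping to $[0, H]$ are assumptions made for the sketch. The key features from the abstract are that a single random seed drives all noise in the episode, and the noise magnitude adapts to estimated variance rather than using a worst-case Hoeffding-style scale.

```python
import numpy as np

def randomized_value_iteration(P_hat, R_hat, counts, H, seed):
    """One episode of randomized value iteration (illustrative sketch).

    P_hat:  (H, S, A, S) empirical transition estimates
    R_hat:  (H, S, A)    empirical mean rewards in [0, 1]
    counts: (H, S, A)    visit counts n_h(s, a)
    seed:   single random seed used for the whole episode
    """
    _, S, A, _ = P_hat.shape
    rng = np.random.default_rng(seed)  # ingredient 1: one seed per episode
    Q = np.zeros((H + 1, S, A))
    V = np.zeros((H + 1, S))
    for h in range(H - 1, -1, -1):  # backward induction over the horizon
        next_V = V[h + 1]
        # Ingredient 2 (assumed form): Bernstein-style noise magnitude,
        # scaling with the empirical variance of the next-step value
        # over the visit count, plus a lower-order H/n correction.
        var_next = P_hat[h] @ (next_V ** 2) - (P_hat[h] @ next_V) ** 2
        var_next = np.maximum(var_next, 0.0)
        n = np.maximum(counts[h], 1)
        sigma = np.sqrt(var_next / n) + H / n
        noise = sigma * rng.standard_normal((S, A))
        # Perturbed Bellman backup, clipped to the valid value range.
        Q[h] = np.clip(R_hat[h] + P_hat[h] @ next_V + noise, 0.0, H)
        V[h] = Q[h].max(axis=1)
    return Q
```

The agent would then act greedily with respect to the perturbed `Q` for the episode; because one seed fixes all the noise, the induced policy is consistent within the episode, which is what distinguishes this scheme from resampling noise at every step.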
