
Regret Bounds for Discounted MDPs

Abstract

Recently, it has been shown that carefully designed reinforcement learning (RL) algorithms can achieve near-optimal regret in the episodic and average-reward settings. In practice, however, RL algorithms are mostly applied in the infinite-horizon discounted-reward setting, so it is natural to ask what the lowest achievable regret is in this case, and how close existing RL algorithms come to it. In this paper, we prove a regret lower bound of $\Omega\left(\frac{\sqrt{SAT}}{1 - \gamma} - \frac{1}{(1 - \gamma)^2}\right)$ when $T \geq SA$ on any learning algorithm for infinite-horizon discounted Markov decision processes (MDPs), where $S$ and $A$ are the numbers of states and actions, $T$ is the number of actions taken, and $\gamma$ is the discount factor. We also show that a modified version of the double Q-learning algorithm achieves a regret upper bound of $\tilde{O}\left(\frac{\sqrt{SAT}}{(1 - \gamma)^{2.5}}\right)$ when $T \geq SA$. Compared to our bounds, the previous best lower and upper bounds both have worse dependencies on $T$ and $\gamma$, while our dependencies on $S$, $A$, and $T$ are optimal. The proof of our upper bound is inspired by recent advances in the analysis of Q-learning in the episodic setting, but the cyclic nature of infinite-horizon MDPs poses many new challenges.
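For orientation, the upper bound is obtained by modifying double Q-learning. The abstract does not specify the modification, so the following is only a minimal sketch of the *standard* tabular double Q-learning update (van Hasselt, 2010), with illustrative names (`QA`, `QB`, `double_q_update`) that are not from the paper: two value tables are maintained, and at each step one table selects the greedy next action while the other evaluates it, reducing maximization bias.

```python
import random

def double_q_update(QA, QB, s, a, r, s_next, gamma=0.9, alpha=0.1):
    """One step of standard tabular double Q-learning (illustrative sketch,
    not the paper's modified algorithm).

    QA, QB : two Q-tables, each a list of per-state action-value lists.
    With probability 1/2 we update QA using QB to evaluate the action
    that is greedy w.r.t. QA at s_next, and symmetrically otherwise.
    """
    if random.random() < 0.5:
        # Greedy action under QA, evaluated by QB.
        a_star = max(range(len(QA[s_next])), key=lambda x: QA[s_next][x])
        QA[s][a] += alpha * (r + gamma * QB[s_next][a_star] - QA[s][a])
    else:
        # Greedy action under QB, evaluated by QA.
        b_star = max(range(len(QB[s_next])), key=lambda x: QB[s_next][x])
        QB[s][a] += alpha * (r + gamma * QA[s_next][b_star] - QB[s][a])
```

In the episodic analyses this sketch echoes, the learning rate `alpha` is typically step-dependent rather than constant; it is fixed here only to keep the example short.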
