
Q-learning with Uniformly Bounded Variance: Large Discounting is Not a Barrier to Fast Learning

IEEE Transactions on Automatic Control (TAC), 2020
Abstract

Sample complexity bounds are a common performance metric in the Reinforcement Learning literature. In the discounted-cost, infinite-horizon setting, all of the known bounds have a factor that is a polynomial in $1/(1-\gamma)$, where $\gamma < 1$ is the discount factor. For a large discount factor, these bounds seem to imply that a very large number of samples is required to achieve an $\varepsilon$-optimal policy. The objective of the present work is to introduce a new class of algorithms whose sample complexity is uniformly bounded over all $\gamma < 1$. One may argue that this is impossible, due to a recent min-max lower bound. The explanation is that this previous lower bound concerns a specific problem, which we modify without compromising the ultimate objective of obtaining an $\varepsilon$-optimal policy. Specifically, we show that the asymptotic covariance of the Q-learning algorithm with an optimized step-size sequence is a quadratic function of $1/(1-\gamma)$, an expected and essentially known result. The new relative Q-learning algorithm proposed here is shown to have asymptotic covariance that is quadratic in $1/(1-\rho^*\gamma)$, where $1-\rho^* > 0$ is an upper bound on the spectral gap of an optimal transition matrix.
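To make the contrast concrete, below is a minimal sketch of the relative Q-learning idea in a tabular, cost-minimization setting: a weighted average of the current estimates is subtracted from the usual temporal difference. The function name `relative_q_learning`, the uniform weighting `mu`, the scalar `kappa = gamma`, the $1/n$ step size, and the uniform exploration policy are all illustrative assumptions; the paper's optimized step-size sequence and exact normalization differ, so this is a sketch of the mechanism, not the paper's algorithm.

```python
import numpy as np

def relative_q_learning(P, c, gamma, n_steps, kappa=None, mu=None, seed=0):
    """Sketch of tabular relative Q-learning (cost minimization).

    P : (S, A, S) array of transition probabilities.
    c : (S, A) array of one-step costs.

    The update subtracts kappa * <mu, H> from the ordinary Q-learning
    temporal difference. The intent, per the paper, is to remove the
    large constant component of the Q-function so that the variance of
    the iterates stays bounded as gamma -> 1. The choices of kappa, mu,
    and the step size below are placeholders, not the paper's exact
    specification.
    """
    rng = np.random.default_rng(seed)
    S, A = c.shape
    H = np.zeros((S, A))
    if mu is None:
        mu = np.full((S, A), 1.0 / (S * A))  # uniform weighting (assumption)
    if kappa is None:
        kappa = gamma                        # placeholder scalar gain (assumption)
    x = rng.integers(S)
    for n in range(1, n_steps + 1):
        u = rng.integers(A)                  # uniform exploration (assumption)
        x_next = rng.choice(S, p=P[x, u])
        a_n = 1.0 / n                        # step size; the paper optimizes this sequence
        # Relative temporal difference: ordinary TD minus kappa * <mu, H>.
        td = c[x, u] + gamma * H[x_next].min() - H[x, u] - kappa * np.sum(mu * H)
        H[x, u] += a_n * td
        x = x_next
    return H
```

Because the subtracted term $\kappa \langle \mu, H \rangle$ is a scalar common to all state-action pairs, the estimates are shifted by a constant rather than reordered, so the cost-minimizing policy extracted from $H$ is the same one that would be extracted from $Q^*$; this is what allows the variance reduction without compromising the goal of an $\varepsilon$-optimal policy.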
