Q-learning, which seeks to learn the optimal Q-function of a Markov decision process (MDP) in a model-free fashion, lies at the heart of reinforcement learning. When it comes to the synchronous setting (such that independent samples for all state-action pairs are drawn from a generative model in each iteration), substantial progress has been made towards understanding the sample efficiency of Q-learning. Consider a $\gamma$-discounted infinite-horizon MDP with state space $\mathcal{S}$ and action space $\mathcal{A}$: to yield an entrywise $\varepsilon$-approximation of the optimal Q-function, state-of-the-art theory for Q-learning requires a sample size exceeding the order of $\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^5\varepsilon^{2}}$, which fails to match existing minimax lower bounds. This gives rise to natural questions: what is the sharp sample complexity of Q-learning? Is Q-learning provably sub-optimal? This paper addresses these questions for the synchronous setting: (1) when $|\mathcal{A}|=1$ (so that Q-learning reduces to TD learning), we prove that the sample complexity of TD learning is minimax optimal and scales as $\frac{|\mathcal{S}|}{(1-\gamma)^3\varepsilon^{2}}$ (up to log factor); (2) when $|\mathcal{A}|\geq 2$, we settle the sample complexity of Q-learning to be on the order of $\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^4\varepsilon^{2}}$ (up to log factor). Our theory unveils the strict sub-optimality of Q-learning when $|\mathcal{A}|\geq 2$, and rigorizes the negative impact of over-estimation in Q-learning. Finally, we extend our analysis to accommodate asynchronous Q-learning (i.e., the case with Markovian samples), sharpening the horizon dependency of its sample complexity to be $\frac{1}{(1-\gamma)^4}$.
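For reference, here is a minimal sketch (in Python/NumPy) of synchronous Q-learning in the setting described above: in every iteration, one independent next-state sample is drawn for each state-action pair from a generative model, and all entries of the Q-estimate are updated simultaneously. The toy random MDP, the rescaled linear step size, and the iteration count are illustrative assumptions and are not taken from the paper.

```python
# A minimal sketch of synchronous Q-learning on a randomly generated toy MDP.
# The MDP, step-size schedule, and iteration count below are illustrative
# assumptions, not the configuration analyzed in the paper.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy MDP: |S| states, |A| actions, discount factor gamma.
S, A, gamma = 20, 4, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))   # transition kernel P(s' | s, a), shape (S, A, S)
r = rng.uniform(size=(S, A))                 # deterministic rewards in [0, 1]

Q = np.zeros((S, A))
T = 20_000
for t in range(1, T + 1):
    eta = 1.0 / (1 + (1 - gamma) * t)        # rescaled linear step size (an assumption)

    # Synchronous setting: draw an independent next state for EVERY (s, a)
    # pair from the generative model in each iteration.
    next_states = np.array([[rng.choice(S, p=P[s, a]) for a in range(A)]
                            for s in range(S)])

    # Q-learning update applied to all entries at once.
    target = r + gamma * Q[next_states].max(axis=-1)
    Q = (1 - eta) * Q + eta * target

print("max Q-value estimate:", Q.max())
```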