
A Note on Target Q-learning For Solving Finite MDPs with A Generative Oracle

Abstract

Q-learning with function approximation can diverge in the off-policy setting, and the target network is a powerful technique to address this issue. In this manuscript, we examine the sample complexity of the associated target Q-learning algorithm in the tabular case with a generative oracle. We point out a misleading claim in [Lee and He, 2020] and establish a tight analysis. In particular, we demonstrate that the sample complexity of the target Q-learning algorithm in [Lee and He, 2020] is $\widetilde{\mathcal O}(|\mathcal S|^2|\mathcal A|^2 (1-\gamma)^{-5}\varepsilon^{-2})$. Furthermore, we show that this sample complexity improves to $\widetilde{\mathcal O}(|\mathcal S||\mathcal A| (1-\gamma)^{-5}\varepsilon^{-2})$ if we can sequentially update all state-action pairs, and to $\widetilde{\mathcal O}(|\mathcal S||\mathcal A| (1-\gamma)^{-4}\varepsilon^{-2})$ if $\gamma$ is further in $(1/2, 1)$. Compared with vanilla Q-learning, our results show that introducing a periodically frozen target Q-function does not sacrifice the sample complexity.
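
For concreteness, below is a minimal sketch of the synchronous variant described above: every state-action pair is updated in each sweep using one sample from a generative oracle, and the target Q-table is refrozen periodically. The oracle interface (`sample_next_state`, `reward`) and all hyperparameters are illustrative assumptions, not the exact algorithm or step-size schedule analyzed in the paper.

```python
# A minimal sketch of tabular target Q-learning with a generative oracle.
# The MDP interface (sample_next_state, reward) and the hyperparameters
# below are illustrative assumptions, not the paper's exact setup.
import numpy as np


def target_q_learning(num_states, num_actions, sample_next_state, reward,
                      gamma=0.9, step_size=0.1, target_period=100,
                      num_sweeps=10_000, seed=0):
    """Synchronous tabular Q-learning with a periodically frozen target."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((num_states, num_actions))   # online Q-table
    Q_target = Q.copy()                       # frozen target Q-table

    for t in range(num_sweeps):
        # Sequentially update all (s, a) pairs, one generative sample each.
        for s in range(num_states):
            for a in range(num_actions):
                s_next = sample_next_state(s, a, rng)  # generative oracle
                td_target = reward(s, a) + gamma * Q_target[s_next].max()
                Q[s, a] += step_size * (td_target - Q[s, a])

        # Refresh the frozen target every `target_period` sweeps.
        if (t + 1) % target_period == 0:
            Q_target = Q.copy()

    return Q
```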
