An Improved Finite-time Analysis of Temporal Difference Learning with Deep Neural Networks

Abstract
Temporal difference (TD) learning algorithms with neural network function parameterization have well-established empirical success in many practical large-scale reinforcement learning tasks. However, theoretical understanding of these algorithms remains challenging due to the nonlinearity of the action-value approximation. In this paper, we develop an improved non-asymptotic analysis of the neural TD method with a general $L$-layer neural network. New proof techniques are developed and an improved $\tilde{\mathcal{O}}(\epsilon^{-1})$ sample complexity is derived. To the best of our knowledge, this is the first finite-time analysis of neural TD that achieves an $\tilde{\mathcal{O}}(\epsilon^{-1})$ complexity under Markovian sampling, as opposed to the best known $\mathcal{O}(\epsilon^{-2})$ complexity in the existing literature.
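To make the object of study concrete, the sketch below shows a generic neural TD(0) semi-gradient update of the kind such analyses consider: a value network is nudged toward the bootstrapped target $r + \gamma V(s')$ along a stream of transitions. This is an illustrative sketch only; the network size, step size, and the toy transition stream are placeholder assumptions and not the paper's actual setting or proof construction.

```python
# Minimal sketch of neural TD(0) with a semi-gradient update (illustrative;
# widths, depth, step size, and the dummy environment are placeholder choices).
import torch
import torch.nn as nn

torch.manual_seed(0)

state_dim, hidden, gamma, lr = 4, 64, 0.99, 1e-3

# Value network: a small multi-layer perceptron standing in for a general
# multi-layer network.
value_net = nn.Sequential(
    nn.Linear(state_dim, hidden), nn.ReLU(),
    nn.Linear(hidden, hidden), nn.ReLU(),
    nn.Linear(hidden, 1),
)
optimizer = torch.optim.SGD(value_net.parameters(), lr=lr)

def td_update(state, reward, next_state):
    """One semi-gradient TD(0) step: move V(s) toward r + gamma * V(s')."""
    v = value_net(state)
    with torch.no_grad():  # semi-gradient: no gradient flows through the target
        target = reward + gamma * value_net(next_state)
    loss = 0.5 * (v - target).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy Markovian-style stream of transitions (random placeholders).
state = torch.randn(1, state_dim)
for _ in range(5):
    next_state = 0.9 * state + 0.1 * torch.randn(1, state_dim)
    reward = torch.randn(1, 1)
    td_update(state, reward, next_state)
    state = next_state
```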