Sample Complexity and Overparameterization Bounds for Projection-Free Neural TD Learning

IEEE Transactions on Automatic Control (IEEE TAC), 2021
Abstract

We study the dynamics of temporal-difference learning with neural network-based value function approximation over a general state space, namely, \emph{Neural TD learning}. Existing analyses of neural TD learning rely either on an infinite-width analysis or on constraining the network parameters to a (random) compact set; as a result, an extra projection step is required at each iteration. This paper establishes a new convergence analysis of neural TD learning \emph{without any projection}. We show that projection-free TD learning equipped with a two-layer ReLU network of any width exceeding $\mathrm{poly}(\overline{\nu}, 1/\epsilon)$ converges to the true value function with error $\epsilon$ given $\mathrm{poly}(\overline{\nu}, 1/\epsilon)$ iterations or samples, where $\overline{\nu}$ is an upper bound on the RKHS norm of the value function induced by the neural tangent kernel. Our sample complexity and overparameterization bounds are based on a drift analysis of the network parameters as a stopped random process in the lazy-training regime.
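To illustrate the setting, the following is a minimal sketch (not the authors' code) of projection-free TD(0) with a two-layer ReLU network in the lazy-training parameterization $V(s; W) = \frac{1}{\sqrt{m}} \sum_{r=1}^{m} a_r \, \sigma(w_r^\top s)$, where the output weights $a_r$ are fixed and only the input weights are updated. The environment interface (`env.reset`/`env.step`), the behavior `policy`, and all hyperparameters are assumptions made for illustration; the parameter update is a plain stochastic semi-gradient step with no projection onto a compact set.

```python
import numpy as np

def make_network(state_dim, width, rng):
    W = rng.normal(size=(width, state_dim))      # trainable input weights w_r
    a = rng.choice([-1.0, 1.0], size=width)      # fixed (+/-1) output weights a_r
    return W, a

def value(s, W, a):
    # V(s; W) = (1/sqrt(m)) * sum_r a_r * relu(w_r^T s)
    pre = W @ s
    return (a * np.maximum(pre, 0.0)).sum() / np.sqrt(len(a))

def grad_value(s, W, a):
    # dV/dW_r = a_r * 1{w_r^T s > 0} * s / sqrt(m)
    active = (W @ s > 0.0).astype(float)
    return (a * active)[:, None] * s[None, :] / np.sqrt(len(a))

def neural_td0(env, policy, episodes=100, width=512, gamma=0.99, lr=1e-2, seed=0):
    """Projection-free neural TD(0): parameters are never projected back to a ball."""
    rng = np.random.default_rng(seed)
    W, a = make_network(env.observation_dim, width, rng)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            s_next, r, done = env.step(policy(s))
            target = r if done else r + gamma * value(s_next, W, a)
            delta = target - value(s, W, a)          # TD error
            W += lr * delta * grad_value(s, W, a)    # semi-gradient step, no projection
            s = s_next
    return W, a
```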
