
Projection by Convolution: Optimal Sample Complexity for Reinforcement Learning in Continuous-Space MDPs

Abstract

We consider the problem of learning an $\varepsilon$-optimal policy in a general class of continuous-space Markov decision processes (MDPs) having smooth Bellman operators. Given access to a generative model, we achieve rate-optimal sample complexity by performing a simple, \emph{perturbed} version of least-squares value iteration with orthogonal trigonometric polynomials as features. Key to our solution is a novel projection technique based on ideas from harmonic analysis. Our $\widetilde{\mathcal{O}}(\epsilon^{-2-d/(\nu+1)})$ sample complexity, where $d$ is the dimension of the state-action space and $\nu$ the order of smoothness, recovers the state-of-the-art result of discretization approaches for the special case of Lipschitz MDPs ($\nu=0$). At the same time, for $\nu\to\infty$, it recovers and greatly generalizes the $\mathcal{O}(\epsilon^{-2})$ rate of low-rank MDPs, which are more amenable to regression approaches. In this sense, our result bridges the gap between two popular but conflicting perspectives on continuous-space MDPs.
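To make the approach concrete, the sketch below illustrates what a perturbed least-squares value iteration with trigonometric-polynomial features could look like under generative-model access. It is only a minimal illustration: the toy environment, the cosine feature order, the perturbation scale, and all other parameters are hypothetical choices and not the paper's exact algorithm or projection step.

```python
# Illustrative sketch (assumptions): perturbed LSVI with cosine features on a
# toy 1D-state, 2-action MDP accessed through a generative model.
import numpy as np

def features(s, a, order=4):
    """Trigonometric (cosine) features on [0, 1], one block per discrete action."""
    x = np.array([1.0] + [np.cos(np.pi * k * s) for k in range(1, order + 1)])
    phi = np.zeros(2 * (order + 1))
    phi[a * (order + 1):(a + 1) * (order + 1)] = x
    return phi

def generative_model(s, a, rng):
    """Hypothetical generative model: smooth reward, action-dependent noisy drift."""
    r = np.cos(np.pi * s) + 0.1 * a
    s_next = np.clip(s + (0.1 if a == 1 else -0.1)
                     + 0.05 * rng.standard_normal(), 0.0, 1.0)
    return r, s_next

def perturbed_lsvi(n_iters=50, n_samples=500, gamma=0.9, order=4,
                   noise=0.1, seed=0):
    rng = np.random.default_rng(seed)
    dim = 2 * (order + 1)
    theta = np.zeros(dim)
    for _ in range(n_iters):
        Phi, y = [], []
        for _ in range(n_samples):
            s, a = rng.uniform(), rng.integers(2)
            r, s_next = generative_model(s, a, rng)
            # Bellman backup of the current Q-estimate, with Gaussian
            # perturbation of the regression target.
            v_next = max(features(s_next, b, order) @ theta for b in range(2))
            Phi.append(features(s, a, order))
            y.append(r + gamma * v_next + noise * rng.standard_normal())
        Phi, y = np.asarray(Phi), np.asarray(y)
        # Ridge-regularized least-squares fit of the Q-function coefficients.
        theta = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(dim), Phi.T @ y)
    return theta

if __name__ == "__main__":
    print("learned coefficients:", np.round(perturbed_lsvi(), 3))
```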
