Statistical guarantees for continuous-time policy evaluation: blessing of ellipticity and new tradeoffs

Main: 39 pages · Bibliography: 4 pages · Appendix: 12 pages
Abstract

We study the estimation of the value function for continuous-time Markov diffusion processes using a single, discretely observed ergodic trajectory. Our work provides non-asymptotic statistical guarantees for the least-squares temporal-difference (LSTD) method, with performance measured in the first-order Sobolev norm. Specifically, the estimator attains an $O(1/\sqrt{T})$ convergence rate when using a trajectory of length $T$; notably, this rate is achieved as long as $T$ scales nearly linearly with both the mixing time of the diffusion and the number of basis functions employed. A key insight of our approach is that the ellipticity inherent in the diffusion process ensures robust performance even as the effective horizon diverges to infinity. Moreover, we demonstrate that the Markovian component of the statistical error can be controlled by the approximation error, while the martingale component grows at a slower rate relative to the number of basis functions. By carefully balancing these two sources of error, our analysis reveals novel trade-offs between approximation and statistical errors.
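To make the setting concrete, the following is a minimal sketch of LSTD for a discretely observed diffusion. The diffusion (an Ornstein–Uhlenbeck process), the reward, the discount rate, and the polynomial basis are all illustrative choices, not the paper's specific setup; the paper's estimator and norm are more refined than this plain discounted LSTD.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a single discretely observed ergodic trajectory (illustrative
# Ornstein-Uhlenbeck diffusion): dX_t = -X_t dt + dW_t, Euler-Maruyama step h.
h, T = 0.01, 20.0
n = int(T / h)
x = np.zeros(n + 1)
for t in range(n):
    x[t + 1] = x[t] - x[t] * h + np.sqrt(h) * rng.standard_normal()

# Hypothetical reward and discount rate (assumptions for this sketch).
reward = lambda s: np.cos(s)
beta = 1.0                    # continuous-time discount rate
gamma = np.exp(-beta * h)     # per-step discount over an interval of length h

# Basis functions: polynomial features, a stand-in for the paper's basis.
def phi(s, k=5):
    return np.vander(np.atleast_1d(s), k, increasing=True)  # shape (m, k)

Phi = phi(x[:-1])        # phi(X_t)
Phi_next = phi(x[1:])    # phi(X_{t+h})

# LSTD normal equations: solve A theta = b with
#   A = (1/n) sum_t phi(X_t) (phi(X_t) - gamma * phi(X_{t+h}))^T
#   b = (1/n) sum_t phi(X_t) * h * r(X_t)   (reward accrued over step h)
A = Phi.T @ (Phi - gamma * Phi_next) / n
b = Phi.T @ (h * reward(x[:-1])) / n
theta = np.linalg.solve(A, b)

V_hat = lambda s: phi(s) @ theta   # estimated value function
```

Here `theta` balances the two error sources discussed above: a richer basis shrinks the approximation error but inflates the statistical (martingale) error, which is the trade-off the paper quantifies.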
