On Well-posedness and Minimax Optimal Rates of Nonparametric Q-function Estimation in Off-policy Evaluation

Abstract

We study the off-policy evaluation (OPE) problem in an infinite-horizon Markov decision process with continuous states and actions. We recast the Q-function estimation into a special form of the nonparametric instrumental variables (NPIV) estimation problem. We first show that under one mild condition the NPIV formulation of Q-function estimation is well-posed in the sense of the L^2 measure of ill-posedness with respect to the data generating distribution, bypassing a strong assumption on the discount factor γ imposed in the recent literature for obtaining the L^2 convergence rates of various Q-function estimators. Thanks to this new well-posedness property, we derive the first minimax lower bounds for the convergence rates of nonparametric estimation of the Q-function and its derivatives in both sup-norm and L^2-norm, which are shown to be the same as those for classical nonparametric regression (Stone, 1982). We then propose a sieve two-stage least squares estimator and establish its rate-optimality in both norms under some mild conditions. Our general results on the well-posedness and the minimax lower bounds are of independent interest for studying not only other nonparametric estimators of the Q-function but also efficient estimation of the value of any target policy in off-policy settings.
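To make the NPIV recasting concrete, below is a minimal sketch (not the authors' implementation) of a sieve two-stage least squares estimator for the Q-function under the conditional moment restriction E[R + γ·Q(S', A') − Q(S, A) | S, A] = 0 with A' drawn from the target policy π. The polynomial basis, the Monte Carlo stand-in for the integral over π(·|s'), the ridge term, and names such as `phi_basis` and `sieve_2sls_q` are illustrative assumptions, not from the paper.

```python
import numpy as np

def phi_basis(s, a, degree=2):
    """Polynomial sieve basis for Q(s, a); 1-d state and action for simplicity."""
    s = np.atleast_1d(np.asarray(s, dtype=float))
    a = np.atleast_1d(np.asarray(a, dtype=float))
    feats = [s**i * a**j for i in range(degree + 1) for j in range(degree + 1)]
    return np.stack(feats, axis=1)

def sieve_2sls_q(s, a, r, s_next, a_next_pi, gamma=0.9, degree=2, ridge=1e-8):
    """
    Sieve 2SLS sketch for the Q-function of a target policy pi.
    s, a, r, s_next : transitions collected under the behavior policy.
    a_next_pi       : actions sampled from pi at s_next (Monte Carlo stand-in
                      for the integral of Q(s', .) against pi(.|s')).
    Returns a callable q_hat(s, a).
    """
    Phi   = phi_basis(s, a, degree)               # sieve for Q at (S, A)
    Phi_n = phi_basis(s_next, a_next_pi, degree)  # sieve for Q at (S', A' ~ pi)
    B     = phi_basis(s, a, degree)               # instrument basis in (S, A)

    # NPIV-style "endogenous" regressor: Phi - gamma * Phi_next.
    X = Phi - gamma * Phi_n

    # Stage 1: projection onto the instrument space spanned by B.
    P = B @ np.linalg.solve(B.T @ B + ridge * np.eye(B.shape[1]), B.T)
    # Stage 2: least squares of the projected moment on the projected regressor.
    coef = np.linalg.solve(X.T @ P @ X + ridge * np.eye(X.shape[1]), X.T @ P @ r)

    return lambda ss, aa: phi_basis(ss, aa, degree) @ coef
```

Under the abstract's well-posedness result, the projected (L^2) estimation error of this kind of estimator can be translated into error bounds on the Q-function itself without the strong discount-factor condition; the sketch above only illustrates the mechanics, not the conditions required for rate-optimality.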
