Adaptive Doubly Robust Estimator

Abstract

We propose a doubly robust (DR) estimator for off-policy evaluation (OPE) from data obtained via multi-armed bandit (MAB) algorithms. The goal of OPE is to evaluate a new policy using historical data. Because MAB algorithms sequentially update the policy based on past observations, the generated samples are not independent and identically distributed (i.i.d.). To conduct OPE from such dependent samples, we propose an estimator that is asymptotically normal even under this dependency. In particular, we focus on a DR estimator, which combines an inverse probability weighting (IPW) component with an estimator of the conditionally expected outcome. The proposed adaptive DR estimator requires only convergence-rate conditions on the nuisance estimators and other mild regularity conditions; in particular, we impose neither a specific time-series structure nor a Donsker condition. We demonstrate its effectiveness on benchmark datasets, comparing it against a previously proposed DR estimator based on double/debiased machine learning and an adaptive version of the augmented IPW estimator.
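To make the construction concrete, the following is a minimal sketch of an adaptive DR estimate on simulated bandit data. All names (`adaptive_dr_estimate`, the epsilon-greedy logging policy, the running-mean reward model) are illustrative assumptions, not the paper's implementation; the key adaptive ingredient is that the reward model at step t is fit only on samples collected before step t, so the per-step DR score retains a martingale structure despite the non-i.i.d. data.

```python
import numpy as np

def adaptive_dr_estimate(actions, rewards, logging_probs, target_probs, n_arms):
    """Adaptive DR estimate of a target policy's value from bandit logs.

    At step t the reward model mu_hat (a simple running mean per arm,
    our simplifying assumption) uses only samples 0..t-1.
    """
    T = len(actions)
    sums = np.zeros(n_arms)    # running reward sums per arm (past data only)
    counts = np.zeros(n_arms)  # running pull counts per arm (past data only)
    scores = np.empty(T)
    for t in range(T):
        # mu_hat_t: mean reward per arm, estimated from past observations
        mu_hat = np.divide(sums, counts, out=np.zeros(n_arms), where=counts > 0)
        a = actions[t]
        direct = target_probs[t] @ mu_hat                    # direct-method term
        ipw = target_probs[t][a] / logging_probs[t][a]       # importance weight
        scores[t] = direct + ipw * (rewards[t] - mu_hat[a])  # DR score
        sums[a] += rewards[t]                                # update model for t+1
        counts[a] += 1
    return scores.mean()

# Simulate logs from an adaptive (epsilon-greedy, decaying exploration) policy.
rng = np.random.default_rng(0)
n_arms, T = 3, 5000
true_means = np.array([0.2, 0.5, 0.8])
target = np.array([0.1, 0.1, 0.8])          # fixed target policy to evaluate
actions, rewards, log_probs, tgt_probs = [], [], [], []
emp, cnt = np.zeros(n_arms), np.ones(n_arms)
for t in range(T):
    eps = max(0.1, 1.0 / (1 + 0.01 * t))    # decaying exploration rate
    best = int(np.argmax(emp / cnt))
    p = np.full(n_arms, eps / n_arms)
    p[best] += 1 - eps                      # logging probabilities at step t
    a = rng.choice(n_arms, p=p)
    r = rng.normal(true_means[a], 0.1)
    actions.append(a); rewards.append(r)
    log_probs.append(p); tgt_probs.append(target)
    emp[a] += r; cnt[a] += 1

est = adaptive_dr_estimate(actions, rewards, np.array(log_probs),
                           np.array(tgt_probs), n_arms)
true_value = target @ true_means            # 0.71 in this simulation
print(est, true_value)
```

The running-mean reward model stands in for the paper's general nuisance estimators, which need only satisfy convergence-rate conditions; any model fit on strictly past data preserves the martingale property that drives asymptotic normality.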
