
Nearly Optimal Sample Complexity of Offline KL-Regularized Contextual Bandits under Single-Policy Concentrability

Main: 10 pages
3 figures
Bibliography: 5 pages
1 table
Appendix: 20 pages
Abstract

KL-regularized policy optimization has become a workhorse in learning-based decision making, but its theoretical understanding remains limited. Although recent progress has been made toward settling the sample complexity of KL-regularized contextual bandits, existing sample complexity bounds are either $\tilde{O}(\epsilon^{-2})$ under single-policy concentrability or $\tilde{O}(\epsilon^{-1})$ under all-policy concentrability. In this paper, we propose the \emph{first} algorithm with $\tilde{O}(\epsilon^{-1})$ sample complexity under single-policy concentrability for offline contextual bandits. Our algorithm is designed for general function approximation and is based on the principle of \emph{pessimism in the face of uncertainty}. The core of our proof leverages the strong convexity of the KL regularization, together with the conditional non-negativity of the gap between the true reward and its pessimistic estimator, to refine a mean-value-type risk upper bound to its extreme. This in turn leads to a novel covariance-based analysis, effectively bypassing the need for uniform control over the discrepancy between any two functions in the function class. The near-optimality of our algorithm is demonstrated by an $\tilde{\Omega}(\epsilon^{-1})$ lower bound. Furthermore, we extend our algorithm to contextual dueling bandits and achieve a similar nearly optimal sample complexity.
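To make the two ingredients of the abstract concrete, the following is a minimal illustrative sketch (not the paper's algorithm) of a pessimistic KL-regularized policy update for a single context with a finite action set. It uses the standard closed-form fact that the maximizer of $\mathbb{E}_\pi[r] - \eta\,\mathrm{KL}(\pi \| \pi_{\mathrm{ref}})$ is the softmax tilting $\pi(a) \propto \pi_{\mathrm{ref}}(a)\exp(r(a)/\eta)$; the names `r_hat`, `bonus`, `pi_ref`, and `eta` are hypothetical placeholders for a reward estimate, an uncertainty penalty, a reference policy, and the regularization strength.

```python
import numpy as np

def pessimistic_kl_policy(r_hat, bonus, pi_ref, eta):
    """Sketch of a pessimism-based KL-regularized update (illustrative only).

    r_hat : estimated rewards per action, shape (A,)
    bonus : non-negative uncertainty widths per action, shape (A,)
    pi_ref: reference policy, a probability vector, shape (A,)
    eta   : KL-regularization strength (> 0)
    """
    # Pessimism: act on a lower-confidence bound of the reward.
    r_pess = r_hat - bonus
    # Closed-form maximizer of E_pi[r_pess] - eta * KL(pi || pi_ref):
    # pi(a) proportional to pi_ref(a) * exp(r_pess(a) / eta).
    logits = np.log(pi_ref) + r_pess / eta
    w = np.exp(logits - logits.max())  # subtract max for numerical stability
    return w / w.sum()
```

With zero bonuses and a uniform reference policy, the update reduces to an ordinary softmax over the estimated rewards; a larger bonus on a poorly-covered action shrinks its probability toward (and below) its reference weight, which is the effect single-policy concentrability analyses exploit.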
