Nearly Horizon-Free Offline Reinforcement Learning

Abstract

We revisit offline reinforcement learning on episodic time-homogeneous Markov Decision Processes (MDPs). For tabular MDPs with $S$ states and $A$ actions, or linear MDPs with anchor points and feature dimension $d$, given $K$ collected episodes with minimum visiting probability $d_m$ over (anchor) state-action pairs, we obtain nearly horizon-$H$-free sample complexity bounds for offline reinforcement learning when the total reward is upper bounded by $1$. Specifically: 1. For offline policy evaluation, we obtain an $\tilde{O}\left(\sqrt{\frac{1}{K d_m}}\right)$ error bound for the plug-in estimator, which matches the lower bound up to logarithmic factors and has no additional $\mathrm{poly}\left(H, S, A, d\right)$ dependency in higher-order terms. 2. For offline policy optimization, we obtain an $\tilde{O}\left(\sqrt{\frac{1}{K d_m}} + \frac{\min(S, d)}{K d_m}\right)$ sub-optimality gap for the empirical optimal policy, which approaches the lower bound up to logarithmic factors and a high-order term, improving upon the best known result of \cite{cui2020plug}, which has additional $\mathrm{poly}\left(H, S, d\right)$ factors in the main term. To the best of our knowledge, these are the \emph{first} nearly horizon-free bounds for episodic time-homogeneous offline tabular MDPs and linear MDPs with anchor points. Central to our analysis is a simple yet effective recursion-based method for bounding a "total variance" term in the offline setting, which may be of independent interest.
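The abstract's first result concerns the plug-in estimator for offline policy evaluation. As a point of reference, the sketch below illustrates the standard plug-in (model-based) recipe for a tabular, time-homogeneous episodic MDP: build the empirical transition kernel and mean rewards from the offline episodes, then evaluate the target policy by backward induction on that empirical model. The function name, the episode format, and the uniform fallback for unvisited pairs are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def plug_in_ope(episodes, pi, S, A, H):
    """Plug-in off-policy evaluation on a tabular, time-homogeneous
    episodic MDP (illustrative sketch, not the paper's code).

    episodes: list of length-H trajectories of (s, a, r, s_next) tuples
    pi:       target policy, shape (H, S, A); pi[h, s, a] = prob of action a
    Returns the estimated value V^pi_1(s) for every initial state s.
    """
    counts = np.zeros((S, A, S))
    reward_sum = np.zeros((S, A))
    for traj in episodes:
        for (s, a, r, s_next) in traj:
            counts[s, a, s_next] += 1
            reward_sum[s, a] += r

    n_sa = counts.sum(axis=2)  # visit counts N(s, a)
    # Empirical transition kernel and mean reward; a uniform fallback is
    # used for unvisited pairs (the paper assumes minimum visitation d_m > 0,
    # so with enough data every relevant pair is observed).
    P_hat = np.where(n_sa[..., None] > 0,
                     counts / np.maximum(n_sa, 1)[..., None],
                     1.0 / S)
    r_hat = reward_sum / np.maximum(n_sa, 1)

    # Backward induction on the empirical model, with V_{H+1} = 0.
    V = np.zeros(S)
    for h in reversed(range(H)):
        Q = r_hat + P_hat @ V        # empirical Q_h(s, a)
        V = (pi[h] * Q).sum(axis=1)  # empirical V_h(s) under pi
    return V
```

Under the abstract's setting (total reward bounded by $1$, minimum visiting probability $d_m$), the paper's claim is that the error of this kind of plug-in estimate scales as $\tilde{O}\left(\sqrt{\frac{1}{K d_m}}\right)$, with no polynomial dependence on $H$, $S$, $A$, or $d$ in the higher-order terms.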
