Nearly Horizon-Free Offline Reinforcement Learning
We revisit offline reinforcement learning on episodic time-homogeneous tabular Markov Decision Processes with $S$ states, $A$ actions and planning horizon $H$. Given the collected $K$ episodes of data with minimum cumulative reaching probability $d_m$, we obtain the first set of nearly $H$-free sample complexity bounds for evaluation and planning using the empirical MDPs: 1. For offline evaluation, we obtain an $\tilde{O}\left(\sqrt{\frac{1}{Kd_m}}\right)$ error rate, which matches the lower bound and does not have additional dependency on $\mathrm{poly}(H,S,A)$ in the higher-order term, in contrast to previous works~\citep{yin2020near,yin2020asymptotically}. 2. For offline policy optimization, we obtain an $\tilde{O}\left(\sqrt{\frac{1}{Kd_m}}+\frac{S}{Kd_m}\right)$ error rate, improving upon the best known result of \cite{cui2020plug}, which has additional $H$ and $S$ factors in the main term. Furthermore, this bound approaches the lower bound up to logarithmic factors and a higher-order term. To the best of our knowledge, these are the first nearly horizon-free bounds in offline reinforcement learning.
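The bounds above concern the plug-in approach: estimate a model from the offline episodes and then evaluate or plan on that empirical MDP. Below is a minimal sketch of this idea for policy evaluation, assuming count-based transition and reward estimates followed by backward induction over the horizon; all identifiers (`empirical_mdp`, `evaluate_policy`, `pi`) are illustrative, not code from the paper.

```python
# A minimal sketch (not the paper's exact procedure) of plug-in offline
# policy evaluation on the empirical MDP.
import numpy as np

def empirical_mdp(episodes, S, A):
    """Count-based estimates P_hat[s, a, s'] and r_hat[s, a] from offline data.

    episodes: list of trajectories, each a list of (s, a, r, s_next) tuples.
    """
    counts = np.zeros((S, A, S))
    rew_sum = np.zeros((S, A))
    for traj in episodes:
        for s, a, r, s_next in traj:
            counts[s, a, s_next] += 1
            rew_sum[s, a] += r
    n = counts.sum(axis=2)            # visit counts N(s, a)
    visited = n > 0                   # leave unvisited pairs at zero
    P_hat = np.zeros((S, A, S))
    P_hat[visited] = counts[visited] / n[visited][:, None]
    r_hat = np.zeros((S, A))
    r_hat[visited] = rew_sum[visited] / n[visited]
    return P_hat, r_hat

def evaluate_policy(P_hat, r_hat, pi, H):
    """Backward induction for a nonstationary deterministic policy pi[h][s]
    on the time-homogeneous empirical MDP; returns V_hat[h, s]."""
    S = r_hat.shape[0]
    V = np.zeros((H + 1, S))
    for h in range(H - 1, -1, -1):
        for s in range(S):
            a = pi[h][s]
            V[h, s] = r_hat[s, a] + P_hat[s, a] @ V[h + 1]
    return V
```

Note that the time-homogeneous structure lets every step reuse the same estimated transition matrix, which is what allows sample complexity to avoid extra factors of $H$; the normalization in the paper bounds the total reward of an episode rather than the per-step reward.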