
UVIP: Model-Free Approach to Evaluate Reinforcement Learning Algorithms

Abstract

Policy evaluation is an important instrument for the comparison of different algorithms in Reinforcement Learning (RL). Yet even precise knowledge of the value function $V^{\pi}$ corresponding to a policy $\pi$ does not provide reliable information on how far the policy $\pi$ is from the optimal one. We present a novel model-free upper value iteration procedure (UVIP) that allows us to estimate the suboptimality gap $V^{\star}(x) - V^{\pi}(x)$ from above and to construct confidence intervals for $V^{\star}$. Our approach relies on upper bounds for the solution of the Bellman optimality equation obtained via a martingale approach. We provide theoretical guarantees for UVIP under general assumptions and illustrate its performance on a number of benchmark RL problems.
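To make the construction concrete, below is a minimal tabular sketch of how such a martingale-based upper value iteration can be realized; it is a plausible reading of the abstract, not the paper's exact algorithm. The key ingredient is a martingale increment $V^{\pi}(Y) - \mathbb{E}[V^{\pi}(Y) \mid x, a]$ subtracted inside the maximum over actions, with the maximum sitting inside an outer expectation; the fixed point of this iteration dominates $V^{\star}$, so together with $V^{\pi}$ it brackets the optimal value. All names (`uvip_sketch`, `n_outer`, `n_inner`) and the toy random-MDP demo are illustrative assumptions, and the transition matrix is used only as a simulator for sampling, keeping the procedure model-free in spirit.

```python
import numpy as np


def uvip_sketch(P, R, pi, gamma=0.9, n_iter=100, n_outer=30, n_inner=30, seed=0):
    """Monte Carlo sketch of an upper value iteration on a tabular MDP.

    P[x, a] -- next-state distribution, used only as a simulator (sampling);
    R[x, a] -- deterministic reward; pi[x] -- the policy under evaluation.
    Returns (v_pi, v_up) with, up to Monte Carlo noise, v_pi <= V* <= v_up.
    """
    rng = np.random.default_rng(seed)
    S, A = R.shape

    # Step 1: sample-based policy evaluation, v_pi ~= V^pi.
    v_pi = np.zeros(S)
    for _ in range(n_iter):
        v_new = np.empty(S)
        for x in range(S):
            ys = rng.choice(S, size=n_inner, p=P[x, pi[x]])
            v_new[x] = R[x, pi[x]] + gamma * v_pi[ys].mean()
        v_pi = v_new

    # Step 2: upper value iteration.  The martingale increment
    # v_pi(y) - E[v_pi(Y) | x, a] (estimated from independent inner
    # samples) is subtracted inside the max over actions, and the max
    # sits inside the outer expectation -- this is what pushes the
    # fixed point above V*.
    v_up = v_pi.copy()
    for _ in range(n_iter):
        v_new = np.empty(S)
        for x in range(S):
            outer = np.empty(n_outer)
            for i in range(n_outer):
                q = np.empty(A)
                for a in range(A):
                    y = rng.choice(S, p=P[x, a])                 # outer sample
                    ys = rng.choice(S, size=n_inner, p=P[x, a])  # inner samples
                    mart = v_pi[y] - v_pi[ys].mean()             # martingale increment
                    q[a] = R[x, a] + gamma * (v_up[y] - mart)
                outer[i] = q.max()
            v_new[x] = outer.mean()
        v_up = v_new
    return v_pi, v_up


if __name__ == "__main__":
    # Toy random MDP, purely for illustration.
    rng = np.random.default_rng(1)
    S, A = 5, 3
    P = rng.dirichlet(np.ones(S), size=(S, A))   # P[x, a] is a distribution
    R = rng.random((S, A))
    pi = rng.integers(A, size=S)                 # some fixed policy to assess
    v_pi, v_up = uvip_sketch(P, R, pi)
    print("pointwise bound on V*(x) - V^pi(x):", v_up - v_pi)
```

The printed vector upper-bounds the suboptimality gap state by state: it stays nonnegative up to sampling noise and shrinks toward zero as $\pi$ approaches the optimal policy, which is the kind of certificate the abstract describes.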
