Empirical Policy Evaluation with Supergraphs
IEEE Journal on Selected Areas in Information Theory (JSAIT), 2020
Abstract
We devise and analyze algorithms for the empirical policy evaluation problem in reinforcement learning. Our algorithms explore backward from high-cost states to find high-value ones, in contrast to forward approaches that explore outward from all states. While several papers have demonstrated the utility of backward exploration empirically, we conduct rigorous analyses showing that our algorithms can substantially reduce average-case sample complexity relative to forward approaches.
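As a rough illustration only (not the paper's supergraph algorithm), the sketch below contrasts the two exploration directions on a fixed-policy tabular problem in Python. `forward_mc` is the baseline forward Monte Carlo estimator, which must be repeated from every state; `backward_push` propagates cost mass along incoming edges using the evaluation identity V = c + γPV, in the style of backward-push methods for PageRank. All function names, parameters, and the toy instance are assumptions made for this sketch.

```python
import numpy as np

def exact_values(P, c, gamma):
    """Ground truth for the demo: solve V = c + gamma * P @ V directly."""
    return np.linalg.solve(np.eye(len(c)) - gamma * P, c)

def forward_mc(P, c, gamma, s, n_rollouts=2000, horizon=60, seed=0):
    """Baseline forward approach: Monte Carlo rollouts starting from state s."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_rollouts):
        state, discount = s, 1.0
        for _ in range(horizon):
            total += discount * c[state]
            state = rng.choice(len(c), p=P[state])
            discount *= gamma
    return total / n_rollouts  # evaluating all S states repeats this S times

def backward_push(P, c, gamma, eps=1e-4):
    """Backward exploration sketch: start residual mass at the costly states
    and push it along *incoming* edges, maintaining the invariant
    V = estimate + sum_k gamma^k P^k residual  (so estimate -> V)."""
    estimate = np.zeros(len(c))
    residual = c.astype(float).copy()
    in_nbrs = [np.flatnonzero(P[:, u]) for u in range(len(c))]
    while True:
        u = int(np.argmax(residual))
        if residual[u] <= eps:          # all residuals small: done
            break
        mass, residual[u] = residual[u], 0.0
        estimate[u] += mass
        for v in in_nbrs[u]:            # push to states that can reach u
            residual[v] += gamma * P[v, u] * mass
    return estimate

# Toy instance: random fixed-policy transitions, cost on a single state.
rng = np.random.default_rng(1)
S, gamma = 30, 0.8
P = rng.random((S, S))
P /= P.sum(axis=1, keepdims=True)       # row-stochastic transition matrix
c = np.zeros(S)
c[S - 1] = 1.0                          # the single high-cost state
V = exact_values(P, c, gamma)
print(np.max(np.abs(V - backward_push(P, c, gamma))))   # ~eps/(1-gamma)
print(abs(V[0] - forward_mc(P, c, gamma, s=0)))         # one state's MC error
```

On this dense toy matrix the push still touches every state; the sample-complexity savings the abstract refers to arise when the transition graph is sparse, so that only the few states able to reach the high-cost ones are ever explored.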
