Adaptive Reward-Free Exploration

Reward-free exploration is a reinforcement learning setting studied by Jin et al. (2020), who address it by running several algorithms with regret guarantees in parallel. In our work, we instead give a more natural adaptive approach for reward-free exploration which directly reduces upper bounds on the maximum MDP estimation error. We show that, interestingly, our reward-free UCRL algorithm can be seen as a variant of an algorithm of Fiechter from 1994, originally proposed for a different objective that we call best-policy identification. We prove that RF-UCRL needs of order $(SAH^4/\varepsilon^2)(\log(1/\delta) + S)$ episodes to output, with probability $1-\delta$, an $\varepsilon$-approximation of the optimal policy for any reward function. This bound improves over existing sample-complexity bounds in both the small $\varepsilon$ and the small $\delta$ regimes. We further investigate the relative complexities of reward-free exploration and best-policy identification.
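To make the high-level description concrete, below is a minimal, schematic sketch of an exploration loop in the spirit described above: maintain upper bounds on the MDP estimation error, act greedily with respect to them, and stop once the bound at the initial state is small. The bonus form, the constants, the stopping threshold, and the `env.reset()`/`env.step(a)` interface are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def reward_free_exploration_sketch(env, S, A, H, epsilon, delta, max_episodes=10_000):
    """Schematic reward-free exploration: explore (collecting no rewards) until an
    upper bound on the estimation error at the initial state drops below epsilon / 2
    (threshold and bonus are illustrative assumptions)."""
    counts = np.zeros((S, A), dtype=int)           # visit counts n_t(s, a)
    trans_counts = np.zeros((S, A, S), dtype=int)  # transition counts n_t(s, a, s')

    for t in range(1, max_episodes + 1):
        # Empirical transition model, with a uniform fallback for unvisited pairs.
        n = np.maximum(counts, 1)[:, :, None]
        p_hat = np.where(counts[:, :, None] > 0, trans_counts / n, 1.0 / S)

        # Exploration bonus; beta is a schematic logarithmic confidence term.
        beta = np.log(2 * S * A * H * t / delta)
        bonus = H * np.sqrt(beta / np.maximum(counts, 1))

        # Backward induction on an upper bound W[h, s, a] of the estimation error.
        W = np.zeros((H + 1, S, A))
        for h in range(H - 1, -1, -1):
            next_val = W[h + 1].max(axis=-1)                # max over next actions
            W[h] = np.minimum(H, bonus + p_hat @ next_val)  # clip at the horizon H

        # Stop once the error bound at the initial state is small enough.
        s0 = env.reset()  # assumed API: reset() returns an integer state
        if W[0, s0].max() <= epsilon / 2:
            return counts, trans_counts, t

        # Roll out the greedy policy w.r.t. W, collecting transitions only (no rewards).
        s = s0
        for h in range(H):
            a = int(W[h, s].argmax())
            s_next = env.step(a)  # assumed API: step(a) returns the next integer state
            counts[s, a] += 1
            trans_counts[s, a, s_next] += 1
            s = s_next

    return counts, trans_counts, max_episodes
```

The collected counts define an empirical model from which a near-optimal policy can then be computed for any reward function supplied afterwards, which is the point of the reward-free setting.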