Finite-Sample Analysis of the Monte Carlo Exploring Starts Algorithm for Reinforcement Learning

Monte Carlo Exploring Starts (MCES), which aims to learn the optimal policy using only sample returns, is a simple and natural algorithm in reinforcement learning that has been shown to converge under various conditions. However, the convergence rate analysis for MCES-style algorithms, in the form of sample complexity, has received very little attention. In this paper we develop a finite-sample bound for a modified MCES algorithm that solves the stochastic shortest path problem. To this end, we prove a novel result on the convergence rate of the policy iteration algorithm. This result implies that with probability at least $1-\delta$, the algorithm returns an optimal policy after $\tilde{\mathcal{O}}(SAK^3\log^3\frac{1}{\delta})$ sampled episodes, where $S$ and $A$ denote the number of states and actions respectively, $K$ is a proxy for episode length, and $\tilde{\mathcal{O}}$ hides logarithmic factors and constants depending on the rewards of the environment, which are assumed to be known.
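
For readers unfamiliar with MCES, the sketch below shows the textbook first-visit Monte Carlo Exploring Starts loop (in the style of Sutton & Barto), not the modified variant analyzed in the paper. The environment interface `env.run_episode(s0, a0, policy)` is an assumed helper for illustration and is not part of the paper.

```python
import random
from collections import defaultdict


def mces(env, states, actions, num_episodes=10_000, gamma=1.0):
    """A minimal sketch of textbook first-visit Monte Carlo Exploring Starts.

    Assumes `env.run_episode(s0, a0, policy)` returns a list of
    (state, action, reward) triples for an episode that starts at (s0, a0)
    and follows `policy` thereafter.
    """
    Q = defaultdict(float)   # state-action value estimates
    n = defaultdict(int)     # visit counts for incremental averaging
    policy = {s: random.choice(actions) for s in states}

    for _ in range(num_episodes):
        # Exploring start: every (state, action) pair is chosen with positive probability.
        s0, a0 = random.choice(states), random.choice(actions)
        episode = env.run_episode(s0, a0, policy)

        # Index of the first visit to each (state, action) pair in this episode.
        first_visit = {}
        for t, (s, a, _) in enumerate(episode):
            first_visit.setdefault((s, a), t)

        # Walk the episode backwards, accumulating the (discounted) return.
        G = 0.0
        for t in range(len(episode) - 1, -1, -1):
            s, a, r = episode[t]
            G = gamma * G + r
            if first_visit[(s, a)] == t:          # first-visit update only
                n[(s, a)] += 1
                Q[(s, a)] += (G - Q[(s, a)]) / n[(s, a)]
                # Greedy policy improvement at the visited state.
                policy[s] = max(actions, key=lambda b: Q[(s, b)])

    return policy, Q
```

The exploring-starts requirement (sampling the initial state-action pair uniformly) is what guarantees every pair is evaluated infinitely often; the paper's modified algorithm and its stochastic-shortest-path setting impose additional structure that this generic sketch does not capture.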