Constrained Markov Decision Processes (CMDPs) formalize sequential decision-making problems in which the objective is to minimize a cost function while satisfying constraints on other cost functions. In this paper, we consider the setting of episodic fixed-horizon CMDPs. We propose an online algorithm that leverages the linear programming formulation of a finite-horizon CMDP for repeated optimistic planning to provide a probably approximately correct (PAC) guarantee on the number of episodes needed to ensure an ε-optimal policy, i.e., a policy whose resulting objective value is within ε of the optimal value and which satisfies the constraints within ε-tolerance, with probability at least 1 − δ. The number of episodes needed is shown to be of the order Õ((|S||A|C²H²/ε²) log(1/δ)), where C is the upper bound on the number of possible successor states for a state-action pair. Therefore, if C ≪ |S|, the number of episodes needed has a linear dependence on the state and action space sizes |S| and |A|, respectively, and a quadratic dependence on the time horizon H.
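The planning step mentioned above is based on the standard occupancy-measure linear program for a finite-horizon CMDP. Below is a minimal sketch of that LP on a toy instance with known dynamics, solved with `scipy.optimize.linprog`; all instance data (states, costs, budget) are made up for illustration, and the paper's algorithm instead solves an optimistic variant of this LP with estimated dynamics each episode.

```python
# Sketch of the occupancy-measure LP for a toy finite-horizon CMDP.
# The instance below is hypothetical, not taken from the paper.
import numpy as np
from scipy.optimize import linprog

S, A, H = 2, 2, 2                 # tiny toy instance (assumed)
# Deterministic dynamics for illustration: taking action a moves to state a.
P = np.zeros((H, S, A, S))        # P[h, s, a, s']
for h in range(H):
    for s in range(S):
        for a in range(A):
            P[h, s, a, a] = 1.0
c = np.zeros((H, S, A)); c[:, :, 0] = 1.0   # objective cost: action 0 costs 1
d = np.zeros((H, S, A)); d[:, :, 1] = 1.0   # constraint cost: action 1 costs 1
budget = 1.0                      # action 1 allowed once in expectation
mu0 = np.array([1.0, 0.0])        # initial state distribution

n = H * S * A                     # one LP variable q[h, s, a] per triple
idx = lambda h, s, a: (h * S + s) * A + a   # matches row-major ravel order

# Flow-conservation equalities, one row per (h, s):
#   sum_a q[0, s, a] = mu0(s), and for h >= 1
#   sum_a q[h, s', a] = sum_{s, a} P(s' | h-1, s, a) q[h-1, s, a].
A_eq = np.zeros((H * S, n)); b_eq = np.zeros(H * S)
for s in range(S):
    for a in range(A):
        A_eq[s, idx(0, s, a)] = 1.0
    b_eq[s] = mu0[s]
for h in range(1, H):
    for s2 in range(S):
        row = h * S + s2
        for a in range(A):
            A_eq[row, idx(h, s2, a)] = 1.0
        for s in range(S):
            for a in range(A):
                A_eq[row, idx(h - 1, s, a)] -= P[h - 1, s, a, s2]

# Minimize expected objective cost subject to expected constraint cost <= budget.
res = linprog(c.ravel(), A_ub=d.ravel()[None, :], b_ub=[budget],
              A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
q = res.x.reshape(H, S, A)        # optimal occupancy measure
# A policy is recovered from q as pi[h, s, a] proportional to q[h, s, a].
print(res.status, res.fun)
```

On this toy instance the optimum is easy to check by hand: the budget allows action 1 at most once in expectation, so one of the two steps must use action 0 and the minimal objective cost is 1.0.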