Learning in Markov Decision Processes under Constraints

Abstract

We consider reinforcement learning (RL) in Markov Decision Processes in which an agent repeatedly interacts with an environment that is modeled by a controlled Markov process. At each time step $t$, it earns a reward and also incurs a cost vector consisting of $M$ costs. We design model-based RL algorithms that maximize the cumulative reward earned over a time horizon of $T$ time-steps, while simultaneously ensuring that the average values of the $M$ cost expenditures are bounded by agent-specified thresholds $c^{ub}_i, i=1,2,\ldots,M$. In order to measure the performance of a reinforcement learning algorithm that satisfies the average cost constraints, we define an $M+1$ dimensional regret vector that is composed of its reward regret and $M$ cost regrets. The reward regret measures the sub-optimality in the cumulative reward, while the $i$-th component of the cost regret vector is the difference between its $i$-th cumulative cost expense and the expected cost expenditure $Tc^{ub}_i$. We prove that the expected value of the regret vector of UCRL-CMDP is upper-bounded as $\tilde{O}\left(T^{2/3}\right)$, where $T$ is the time horizon. We further show how to reduce the regret of a desired subset of the $M$ costs, at the expense of increasing the regrets of rewards and the remaining costs. To the best of our knowledge, ours is the only work that considers non-episodic RL under average cost constraints, and derives algorithms that can \emph{tune the regret vector} according to the agent's requirements on its cost regrets.
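To make the regret vector concrete, a minimal formalization consistent with the definitions above is sketched here; the symbols $\rho^{\star}$ (optimal long-run average reward of the constrained MDP), $r_t$ (reward earned at step $t$), and $c_{i,t}$ ($i$-th cost incurred at step $t$) are illustrative shorthand, not necessarily the paper's notation.

% Sketch of the (M+1)-dimensional regret vector described in the abstract.
% Assumed notation: \rho^{\star}, r_t, and c_{i,t} as introduced above.
\begin{align}
  \Delta^{(r)}(T) &= T\rho^{\star} - \sum_{t=1}^{T} r_t
    && \text{(reward regret: sub-optimality of the cumulative reward)} \\
  \Delta^{(c)}_i(T) &= \sum_{t=1}^{T} c_{i,t} - T c^{ub}_i,
    \quad i=1,2,\ldots,M
    && \text{(cost regrets: excess over the budget } T c^{ub}_i\text{)}
\end{align}

Under this reading, the abstract's claim is that each component of the expected regret vector of UCRL-CMDP grows no faster than $\tilde{O}\left(T^{2/3}\right)$.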
