
Learning Constrained Markov Decision Processes With Non-stationary Rewards and Constraints

Main: 9 pages
Bibliography: 5 pages
Appendix: 28 pages
Abstract

In constrained Markov decision processes (CMDPs) with adversarial rewards and constraints, a well-known impossibility result prevents any algorithm from attaining both sublinear regret and sublinear constraint violation when competing against a best-in-hindsight policy that satisfies the constraints on average. In this paper, we show that this negative result can be eased in CMDPs with non-stationary rewards and constraints, by providing algorithms whose performance smoothly degrades as non-stationarity increases. Specifically, we propose algorithms attaining $\tilde{\mathcal{O}}(\sqrt{T} + C)$ regret and positive constraint violation under bandit feedback, where $C$ is a corruption value measuring the environment's non-stationarity. This can be $\Theta(T)$ in the worst case, consistently with the impossibility result for adversarial CMDPs. First, we design an algorithm with the desired guarantees when $C$ is known. Then, for the case in which $C$ is unknown, we show how to obtain the same results by embedding such an algorithm in a general meta-procedure. This is of independent interest, as it can be applied to any non-stationary constrained online learning setting.
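To give a feel for how a meta-procedure can handle an unknown corruption level, the sketch below implements one common pattern: run a base learner tuned for a guessed corruption level, and double the guess whenever the observed constraint violation exceeds what the base learner would guarantee if the guess were correct. This is a minimal, hypothetical illustration (the `BaseAlgo` interface, the test threshold, and the stub environment are all assumptions for exposition), not the paper's actual meta-procedure.

```python
import math
import random


class BaseAlgo:
    """Placeholder for a CMDP learner tuned for a guessed corruption
    level c_hat (hypothetical interface, not from the paper)."""

    def __init__(self, c_hat: float, horizon: int):
        self.c_hat = c_hat
        self.horizon = horizon

    def act(self) -> int:
        return random.randrange(2)  # stub policy

    def update(self, reward: float, cost: float) -> None:
        pass  # stub: a real learner would update its policy here


def environment_step(action: int) -> tuple[float, float]:
    """Stub non-stationary environment returning (reward, constraint cost)."""
    return random.random(), random.random() - 0.5


def meta_procedure(T: int) -> float:
    """Doubling-guess wrapper for unknown corruption C: restart the base
    learner with a doubled guess whenever its guarantee is falsified."""
    c_hat, t = 1.0, 0
    while t < T:
        algo = BaseAlgo(c_hat, T)
        violation, start = 0.0, t
        while t < T:
            reward, cost = environment_step(algo.act())  # bandit feedback
            algo.update(reward, cost)
            violation += max(cost, 0.0)  # positive constraint violation
            t += 1
            # violation bound the base learner guarantees if true C <= c_hat
            # (illustrative sqrt(T)-style threshold, assumed for the sketch)
            if violation > math.sqrt(t - start) * math.log(T + 1) + c_hat:
                c_hat *= 2.0  # guess falsified: double it and restart
                break
    return c_hat


if __name__ == "__main__":
    print("final corruption guess:", meta_procedure(T=1000))
```

Since the true corruption can only force finitely many doublings, the final guess overshoots the true $C$ by at most a constant factor, which is why this style of wrapper preserves the known-$C$ guarantees up to logarithmic terms.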
