
Beyond Slater's Condition in Online CMDPs with Stochastic and Adversarial Constraints

Main: 9 pages, 12 figures, 1 table; bibliography: 4 pages; appendix: 22 pages
Abstract

We study \emph{online episodic Constrained Markov Decision Processes} (CMDPs) under both stochastic and adversarial constraints. We provide a novel algorithm whose guarantees greatly improve those of the state-of-the-art best-of-both-worlds algorithm introduced by Stradi et al. (2025). In the stochastic regime, \emph{i.e.}, when the constraints are sampled from fixed but unknown distributions, our method achieves $\widetilde{\mathcal{O}}(\sqrt{T})$ regret and constraint violation without relying on Slater's condition, thereby handling settings where no strictly feasible solution exists. Moreover, we provide guarantees on the stronger notion of \emph{positive} constraint violation, which does not allow recovering from large violations in the early episodes by playing strictly safe policies. In the adversarial regime, \emph{i.e.}, when the constraints may change arbitrarily between episodes, our algorithm ensures sublinear constraint violation without Slater's condition, and achieves sublinear $\alpha$-regret with respect to the \emph{unconstrained} optimum, where $\alpha$ is a suitably defined multiplicative approximation factor. We further validate our results through synthetic experiments, showing the practical effectiveness of our algorithm.
