Mirror Descent Policy Optimisation for Robust Constrained Markov Decision Processes

David Bossens
Atsushi Nitanda
Main: 22 pages
Bibliography: 3 pages
Appendix: 17 pages
20 figures
12 tables
Abstract

Safety is an essential requirement for reinforcement learning systems. The newly emerging framework of robust constrained Markov decision processes allows learning policies that satisfy long-term constraints while providing guarantees under epistemic uncertainty. This paper presents mirror descent policy optimisation for robust constrained Markov decision processes (RCMDPs), making use of policy gradient techniques to optimise both the policy (as a maximiser) and the transition kernel (as an adversarial minimiser) on the Lagrangian representing a constrained MDP. In the oracle-based RCMDP setting, we obtain an $\mathcal{O}\left(\frac{1}{T}\right)$ convergence rate for the squared distance as a Bregman divergence, and an $\mathcal{O}\left(e^{-T}\right)$ convergence rate for entropy-regularised objectives. In the sample-based RCMDP setting, we obtain an $\tilde{\mathcal{O}}\left(\frac{1}{T^{1/3}}\right)$ convergence rate. Experiments confirm the benefits of mirror descent policy optimisation in constrained and unconstrained optimisation, and significant improvements are observed in robustness tests when compared to baseline policy optimisation algorithms.
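
To make the max-min structure described above concrete, below is a minimal, hypothetical NumPy sketch of one Lagrangian update in a tabular RCMDP: the policy takes a KL (multiplicative-weights) mirror-ascent step on the Q-values of the Lagrangian, the adversarial transition kernel takes a mirror-descent step (the projection onto the paper's uncertainty set is omitted), and the Lagrange multiplier takes a projected gradient step. The function names, step sizes, and the cost-upper-bound form of the constraint are illustrative assumptions, not the paper's exact algorithm.

import numpy as np

def policy_eval(P, pi, r, gamma):
    """Exact evaluation of V and Q for reward r under policy pi and kernel P.
    P: (S, A, S) transition kernel, pi: (S, A) policy, r: (S, A) reward."""
    S, A = pi.shape
    P_pi = np.einsum('sap,sa->sp', P, pi)          # state-to-state kernel under pi
    r_pi = np.einsum('sa,sa->s', r, pi)            # expected one-step reward under pi
    V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
    Q = r + gamma * np.einsum('sap,p->sa', P, V)
    return V, Q

def mirror_step(pi, Q, eta):
    """KL (negative-entropy) mirror ascent on the simplex: multiplicative weights."""
    new_pi = pi * np.exp(eta * Q)
    return new_pi / new_pi.sum(axis=1, keepdims=True)

def rcmdp_lagrangian_step(pi, P, lam, r, c, budget, gamma,
                          eta_pi=0.1, eta_P=0.05, eta_lam=0.01):
    """One hypothetical max-min update on L = V_r - lam * (V_c - budget):
    the policy ascends, the adversarial kernel descends, and the
    multiplier takes a projected gradient step on the constraint violation."""
    # Lagrangian reward seen by the policy player.
    L_r = r - lam * c
    _, Q_L = policy_eval(P, pi, L_r, gamma)
    pi = mirror_step(pi, Q_L, eta_pi)

    # Adversarial kernel: mirror descent on next-state distributions,
    # shifting probability toward low-value successors.
    # NOTE: projection onto the RCMDP uncertainty set is omitted here.
    V_L, _ = policy_eval(P, pi, L_r, gamma)
    P = P * np.exp(-eta_P * V_L[None, None, :])
    P = P / P.sum(axis=2, keepdims=True)

    # Lagrange multiplier: projected ascent on the violation of the
    # (assumed) cost constraint V_c(s0) <= budget, evaluated at state 0.
    V_c, _ = policy_eval(P, pi, c, gamma)
    lam = max(0.0, lam + eta_lam * (V_c[0] - budget))
    return pi, P, lam

Iterating this step for T rounds is the kind of scheme the oracle-based analysis covers; the sample-based setting replaces the exact policy evaluation with estimated Q-values.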

@article{bossens2025_2506.23165,
  title={Mirror Descent Policy Optimisation for Robust Constrained Markov Decision Processes},
  author={David Bossens and Atsushi Nitanda},
  journal={arXiv preprint arXiv:2506.23165},
  year={2025}
}