Polynomial-Time Approximability of Constrained Reinforcement Learning

Abstract

We study the computational complexity of approximating general constrained Markov decision processes. Our primary contribution is the design of a polynomial-time (0, ϵ)-additive bicriteria approximation algorithm for finding optimal constrained policies across a broad class of recursively computable constraints, including almost-sure, chance, expectation, and their anytime variants. Matching lower bounds imply our approximation guarantees are optimal so long as P ≠ NP. The generality of our approach yields answers to several long-standing open complexity questions in the constrained reinforcement learning literature. Specifically, we are the first to prove polynomial-time approximability for the following settings: policies under chance constraints, deterministic policies under multiple expectation constraints, policies under non-homogeneous constraints (i.e., constraints of different types), and policies under constraints for continuous-state processes.
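To make the (0, ϵ) guarantee concrete, the sketch below illustrates the standard cost-augmentation idea behind such bicriteria results in the simplest case of a single anytime cost constraint on a finite-horizon MDP: augment the state with a discretized remaining budget and run backward dynamic programming. This is a minimal illustration, not the paper's algorithm; the function name, its arguments, and the chosen constraint setting are all assumptions made for the example.

import numpy as np

def bicriteria_anytime_dp(P, r, c, budget, H, eps):
    """
    Illustrative sketch of a (0, eps)-additive bicriteria DP for an
    anytime cost constraint: maximize expected reward subject to the
    cumulative cost staying <= budget at every step, while allowing the
    realized cost to overshoot the budget by at most eps.

    P[a][s, s'] : transition probabilities for action a
    r[s, a]     : rewards,  c[s, a] >= 0 : per-step costs
    H           : horizon,  eps : permitted additive constraint violation
    (All names here are hypothetical, chosen for this example.)
    """
    S, A = r.shape
    delta = eps / H                        # cost-rounding granularity
    K = int(np.floor(budget / delta)) + 1  # discrete remaining-budget levels
    # V[s, k] = best reward-to-go from state s with k * delta budget left
    V = np.zeros((S, K))
    policy = []
    for t in reversed(range(H)):
        Q = np.full((S, K, A), -np.inf)    # -inf marks infeasible choices
        for a in range(A):
            for s in range(S):
                # Round each step cost *down*: we under-count by < delta
                # per step, hence by at most H * delta = eps in total.
                kc = int(np.floor(c[s, a] / delta))
                for k in range(K):
                    if kc <= k:            # action fits the tracked budget
                        Q[s, k, a] = r[s, a] + P[a][s] @ V[:, k - kc]
        policy.append(Q.argmax(axis=2))    # greedy action per (state, budget)
        V = Q.max(axis=2)
    policy.reverse()
    return V, policy

Rounding costs down means every policy feasible for the true budget remains feasible after discretization, so the objective loses nothing (the "0" in (0, ϵ)), while the realized cumulative cost can exceed the budget by at most H · δ = ϵ (the "ϵ"). The augmented state space has size S · K, so the dynamic program runs in time polynomial in the problem size and 1/ϵ.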

@article{mcmahan2025_2502.07764,
  title={Polynomial-Time Approximability of Constrained Reinforcement Learning},
  author={Jeremy McMahan},
  journal={arXiv preprint arXiv:2502.07764},
  year={2025}
}