
Sample Complexity Bounds for Linear Constrained MDPs with a Generative Model

Xingtu Liu
Lin F. Yang
Sharan Vaswani
Main: 9 pages
Bibliography: 4 pages
Appendix: 38 pages
Abstract

We consider infinite-horizon $\gamma$-discounted (linear) constrained Markov decision processes (CMDPs), where the objective is to find a policy that maximizes the expected cumulative reward subject to expected cumulative constraints. Given access to a generative model, we propose to solve CMDPs with a primal-dual framework that can leverage any black-box unconstrained MDP solver. For linear CMDPs with feature dimension $d$, we instantiate the framework using mirror descent value iteration (\texttt{MDVI})~\citep{kitamura2023regularization} as an example MDP solver. We provide sample complexity bounds for the resulting CMDP algorithm in two cases: (i) relaxed feasibility, where small constraint violations are allowed, and (ii) strict feasibility, where the output policy is required to exactly satisfy the constraint. For (i), we prove that the algorithm can return an $\epsilon$-optimal policy with high probability using $\tilde{O}\left(\frac{d^2}{(1-\gamma)^4\epsilon^2}\right)$ samples. We note that these results exhibit a near-optimal dependence on both $d$ and $\epsilon$. For (ii), we show that the algorithm requires $\tilde{O}\left(\frac{d^2}{(1-\gamma)^6\epsilon^2\zeta^2}\right)$ samples, where $\zeta$ is the problem-dependent Slater constant that characterizes the size of the feasible region. Finally, we instantiate our framework for tabular CMDPs and show that it can be used to recover near-optimal sample complexities in this setting.
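
To make the black-box reduction described above concrete, the following is a minimal Python sketch of a generic Lagrangian primal-dual loop that calls an unconstrained solver as a subroutine. The names solve_mdp, evaluate, reward, constraint, and threshold are hypothetical placeholders, and the update shown is the standard projected-gradient dual step; this is an illustration of the general idea, not necessarily the paper's exact algorithm (which instantiates the solver with MDVI under a generative model).

# --- Hypothetical interfaces (assumptions for illustration, not from the paper) ---
# solve_mdp(reward_fn): any black-box unconstrained MDP solver; returns a policy.
#                       In the paper this role is played by MDVI run with a
#                       generative model.
# evaluate(policy, f):  returns the expected discounted cumulative value of the
#                       signal f(s, a) under the given policy.

def primal_dual_cmdp(solve_mdp, evaluate, reward, constraint, threshold,
                     num_iters=200, step_size=0.1, lambda_max=10.0):
    """Generic Lagrangian primal-dual loop for a single-constraint CMDP:
    maximize the reward value subject to the constraint value being >= threshold.
    """
    lam = 0.0
    policies = []
    for _ in range(num_iters):
        # Primal step: hand the Lagrangian reward r + lam * c to the black-box solver.
        lagrangian = lambda s, a, lam=lam: reward(s, a) + lam * constraint(s, a)
        policies.append(solve_mdp(lagrangian))

        # Dual step: projected gradient update on the dual variable. If the
        # constraint is violated (value below the threshold), lam increases,
        # putting more weight on the constraint in the next primal step.
        violation = threshold - evaluate(policies[-1], constraint)
        lam = min(max(lam + step_size * violation, 0.0), lambda_max)

    # Primal-dual analyses of this kind typically output a mixture
    # (e.g. the uniform average) of the iterates.
    return policies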

@article{liu2025_2507.02089,
  title={Sample Complexity Bounds for Linear Constrained MDPs with a Generative Model},
  author={Xingtu Liu and Lin F. Yang and Sharan Vaswani},
  journal={arXiv preprint arXiv:2507.02089},
  year={2025}
}