Sample Complexity Bounds for Linear Constrained MDPs with a Generative Model

We consider infinite-horizon $\gamma$-discounted (linear) constrained Markov decision processes (CMDPs) where the objective is to find a policy that maximizes the expected cumulative reward subject to expected cumulative constraints. Given access to a generative model, we propose to solve CMDPs with a primal-dual framework that can leverage any black-box unconstrained MDP solver. For linear CMDPs with feature dimension $d$, we instantiate the framework using mirror descent value iteration (\texttt{MDVI})~\citep{kitamura2023regularization} as an example MDP solver. We provide sample complexity bounds for the resulting CMDP algorithm in two cases: (i) relaxed feasibility, where small constraint violations are allowed, and (ii) strict feasibility, where the output policy is required to satisfy the constraint exactly. For (i), we prove that the algorithm can return an $\epsilon$-optimal policy with high probability, and that the resulting sample complexity exhibits a near-optimal dependence on both $d$ and $\epsilon$. For (ii), we show that the algorithm's sample complexity additionally depends on $\zeta$, the problem-dependent Slater constant that characterizes the size of the feasible region. Finally, we instantiate our framework for tabular CMDPs and show that it can be used to recover near-optimal sample complexities in this setting.
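For concreteness, the two feasibility notions can be stated in standard CMDP notation; the symbols $V_r^{\pi}$, $V_c^{\pi}$, $b$, and $\rho$ below are conventional choices assumed for illustration, not taken verbatim from the paper. The underlying problem is
\[
\max_{\pi}\; V_r^{\pi}(\rho) \quad \text{s.t.} \quad V_c^{\pi}(\rho) \ge b,
\]
where $V_r^{\pi}(\rho)$ and $V_c^{\pi}(\rho)$ denote the expected cumulative $\gamma$-discounted reward and constraint value from initial distribution $\rho$, and $b$ is the constraint threshold. Under relaxed feasibility, the returned policy $\hat{\pi}$ must satisfy $V_r^{\hat{\pi}}(\rho) \ge V_r^{\pi^*}(\rho) - \epsilon$ and $V_c^{\hat{\pi}}(\rho) \ge b - \epsilon$; under strict feasibility, the same $\epsilon$-optimality is required together with $V_c^{\hat{\pi}}(\rho) \ge b$. The Slater constant $\zeta$ measures the margin by which the constraint can be strictly satisfied, e.g., $\zeta = \max_{\pi} V_c^{\pi}(\rho) - b$.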
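The primal-dual reduction can be sketched as follows; this is a minimal illustration in which the names \texttt{solve\_mdp}, \texttt{eval\_constraint}, \texttt{eta}, \texttt{lambda\_max}, and \texttt{T} are hypothetical, with the paper's instantiation plugging in \texttt{MDVI} as the black-box solver.

```python
# Minimal sketch of a Lagrangian primal-dual loop over a black-box MDP solver.
# All names below (solve_mdp, eval_constraint, eta, lambda_max, T) are
# illustrative assumptions, not the paper's actual interface.

def primal_dual_cmdp(solve_mdp, eval_constraint, b, T, eta, lambda_max):
    """Reduce the CMDP to a sequence of unconstrained MDP solves.

    solve_mdp(lam):      returns a policy (near-)maximizing V_r + lam * V_c,
                         e.g., by running an MDP solver on the reward r + lam * c.
    eval_constraint(pi): returns an estimate of V_c^pi, e.g., from rollouts
                         drawn via the generative model.
    b:                   constraint threshold; T: number of iterations;
    eta:                 dual step size; lambda_max: cap on the dual variable.
    """
    lam, policies = 0.0, []
    for _ in range(T):
        pi = solve_mdp(lam)              # primal step: best response to lam
        policies.append(pi)
        violation = b - eval_constraint(pi)
        # dual step: increase lam when the constraint is violated,
        # then project onto [0, lambda_max]
        lam = min(max(lam + eta * violation, 0.0), lambda_max)
    return policies                      # output: uniform mixture of iterates
```

In this literature, the cap on the dual variable is commonly chosen using the Slater constant, on the order of $1/((1-\gamma)\zeta)$, and the output policy is taken to be the uniform mixture of the iterates.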
@article{liu2025_2507.02089,
  title={Sample Complexity Bounds for Linear Constrained MDPs with a Generative Model},
  author={Xingtu Liu and Lin F. Yang and Sharan Vaswani},
  journal={arXiv preprint arXiv:2507.02089},
  year={2025}
}