We study offline constrained reinforcement learning (RL) with general function approximation. We aim to learn a policy from a pre-collected dataset that maximizes the expected discounted cumulative reward for a primary reward signal while ensuring that the expected discounted returns for multiple auxiliary reward signals remain above predefined thresholds. Existing algorithms either require fully exploratory data, are computationally inefficient, or depend on an additional auxiliary function class to obtain an $\epsilon$-optimal policy with sample complexity $\mathcal{O}(\epsilon^{-2})$. In this paper, we propose an oracle-efficient primal-dual algorithm based on a linear programming (LP) formulation, achieving $\mathcal{O}(\epsilon^{-2})$ sample complexity under partial data coverage. By introducing a realizability assumption, our approach ensures that all saddle points of the Lagrangian are optimal, removing the need for the regularization that complicated prior analyses. Through a Lagrangian decomposition, our method extracts policies without requiring knowledge of the data-generating distribution, enhancing practical applicability.
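For concreteness, the constrained objective described above can be phrased as a saddle-point problem. The following is a minimal sketch in standard constrained-MDP notation; the symbols $r$, $u_i$, $\tau_i$, $\gamma$, and $\lambda$ are illustrative and not taken from the paper:
\[
\max_{\pi}\; \mathbb{E}_{\pi}\!\Big[\sum_{t=0}^{\infty}\gamma^{t} r(s_t,a_t)\Big]
\quad\text{subject to}\quad
\mathbb{E}_{\pi}\!\Big[\sum_{t=0}^{\infty}\gamma^{t} u_i(s_t,a_t)\Big] \ge \tau_i,
\qquad i=1,\dots,m,
\]
with Lagrangian
\[
\mathcal{L}(\pi,\lambda) = \mathbb{E}_{\pi}\!\Big[\sum_{t=0}^{\infty}\gamma^{t} r(s_t,a_t)\Big]
+ \sum_{i=1}^{m}\lambda_i\Big(\mathbb{E}_{\pi}\!\Big[\sum_{t=0}^{\infty}\gamma^{t} u_i(s_t,a_t)\Big]-\tau_i\Big),
\qquad \lambda \ge 0.
\]
A primal-dual scheme of the kind described above alternates updates of the policy $\pi$ and the multipliers $\lambda$ so as to approach a saddle point of $\mathcal{L}$.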
@article{hong2025_2505.17506,
  title   = {Offline Constrained Reinforcement Learning under Partial Data Coverage},
  author  = {Kihyuk Hong and Ambuj Tewari},
  journal = {arXiv preprint arXiv:2505.17506},
  year    = {2025}
}