Sample Complexity of Distributionally Robust Average-Reward Reinforcement Learning

Motivated by practical applications where stable long-term performance is critical, such as robotics, operations research, and healthcare, we study the problem of distributionally robust (DR) average-reward reinforcement learning. We propose two algorithms that achieve near-optimal sample complexity. The first reduces the problem to a DR discounted Markov decision process (MDP), while the second, Anchored DR Average-Reward MDP, introduces an anchoring state to stabilize the controlled transition kernels within the uncertainty set. Assuming the nominal MDP is uniformly ergodic, we prove that both algorithms attain a sample complexity of $\widetilde{O}\!\left(|\mathbf{S}||\mathbf{A}|\, t_{\mathrm{mix}}^{2}\, \varepsilon^{-2}\right)$ for estimating the optimal policy as well as the robust average reward under KL- and $f_k$-divergence-based uncertainty sets, provided the uncertainty radius is sufficiently small. Here, $\varepsilon$ is the target accuracy, $|\mathbf{S}|$ and $|\mathbf{A}|$ denote the sizes of the state and action spaces, and $t_{\mathrm{mix}}$ is the mixing time of the nominal MDP. This is the first finite-sample convergence guarantee for DR average-reward reinforcement learning. We further validate the convergence rates of our algorithms through numerical experiments.
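The abstract names the anchoring idea without spelling out the construction. As a purely illustrative sketch (not the paper's algorithm), the Python snippet below shows one plausible reading: every state-action pair is mixed with a small probability of jumping to an added anchor state, so that every kernel near the nominal one mixes quickly. The function name add_anchor_state, the anchoring probability xi, and the uniform restart from the anchor are assumptions introduced here for illustration.

```python
# Hypothetical illustration (not the paper's code): one plausible reading of an
# "anchoring state" construction for a tabular MDP with kernel P of shape (S, A, S).
import numpy as np

def add_anchor_state(P, xi=0.05):
    """Return an augmented kernel over S + 1 states.

    P  : array of shape (S, A, S), nominal transition probabilities.
    xi : assumed anchoring probability (illustrative parameter, not from the paper).
    """
    S, A, _ = P.shape
    P_anchored = np.zeros((S + 1, A, S + 1))
    # Original transitions, damped by (1 - xi); the remaining mass goes to the anchor.
    P_anchored[:S, :, :S] = (1.0 - xi) * P
    P_anchored[:S, :, S] = xi
    # From the anchor, restart uniformly over the original states (another assumption).
    P_anchored[S, :, :S] = 1.0 / S
    return P_anchored

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S, A = 4, 2
    P = rng.dirichlet(np.ones(S), size=(S, A))      # random nominal kernel
    P_tilde = add_anchor_state(P, xi=0.05)
    assert np.allclose(P_tilde.sum(axis=-1), 1.0)   # rows remain distributions
    print(P_tilde.shape)                            # (5, 2, 5)
```

Mixing every transition with a fixed anchor is a standard way to enforce uniform ergodicity, since under any policy the anchor is reached in one step with probability at least xi; whether this matches the paper's exact construction is an assumption here.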
@article{chen2025_2505.10007,
  title   = {Sample Complexity of Distributionally Robust Average-Reward Reinforcement Learning},
  author  = {Zijun Chen and Shengbo Wang and Nian Si},
  journal = {arXiv preprint arXiv:2505.10007},
  year    = {2025}
}