
Sample Complexity of Distributionally Robust Average-Reward Reinforcement Learning

Abstract

Motivated by practical applications where stable long-term performance is critical, such as robotics, operations research, and healthcare, we study the problem of distributionally robust (DR) average-reward reinforcement learning. We propose two algorithms that achieve near-optimal sample complexity. The first reduces the problem to a DR discounted Markov decision process (MDP), while the second, Anchored DR Average-Reward MDP, introduces an anchoring state to stabilize the controlled transition kernels within the uncertainty set. Assuming the nominal MDP is uniformly ergodic, we prove that both algorithms attain a sample complexity of $\widetilde{O}\left(|\mathbf{S}||\mathbf{A}| t_{\mathrm{mix}}^2 \varepsilon^{-2}\right)$ for estimating the optimal policy as well as the robust average reward under KL and $f_k$-divergence-based uncertainty sets, provided the uncertainty radius is sufficiently small. Here, $\varepsilon$ is the target accuracy, $|\mathbf{S}|$ and $|\mathbf{A}|$ denote the sizes of the state and action spaces, and $t_{\mathrm{mix}}$ is the mixing time of the nominal MDP. This is the first finite-sample convergence guarantee for DR average-reward reinforcement learning. We further validate the convergence rates of our algorithms through numerical experiments.
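To make the anchoring idea concrete, one way to picture it is as mixing every transition distribution with a point mass on a designated anchor state, so that every kernel in the uncertainty set reaches that state with some minimum probability. The Python sketch below is purely illustrative and is not the paper's algorithm; the function `anchored_kernel`, the mixing weight `eta`, and the example kernel are hypothetical names chosen for this demonstration.

```python
import numpy as np

def anchored_kernel(P, anchor, eta):
    """Mix each transition distribution with a point mass on `anchor`.

    P      : nominal kernel, shape (S, A, S), rows summing to 1
    anchor : index of the anchoring state (hypothetical choice)
    eta    : mixing weight in (0, 1); every perturbed kernel then
             transitions to `anchor` with probability at least eta
    """
    num_states = P.shape[-1]
    e_anchor = np.zeros(num_states)
    e_anchor[anchor] = 1.0
    # Broadcasting adds the point mass to every (state, action) row.
    return (1.0 - eta) * P + eta * e_anchor

# Toy example: a random 3-state, 2-action nominal kernel.
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(3), size=(3, 2))
P_anch = anchored_kernel(P, anchor=0, eta=0.1)
assert np.allclose(P_anch.sum(axis=-1), 1.0)  # still a valid kernel
```

Under this construction, the anchor state is visited with probability at least `eta` from every state-action pair, which is one generic way a common recurrent state can stabilize the controlled kernels across an uncertainty set.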

@article{chen2025_2505.10007,
  title={Sample Complexity of Distributionally Robust Average-Reward Reinforcement Learning},
  author={Zijun Chen and Shengbo Wang and Nian Si},
  journal={arXiv preprint arXiv:2505.10007},
  year={2025}
}