Finite-Sample Analysis of Policy Evaluation for Robust Average Reward Reinforcement Learning
We present the first finite-sample analysis for policy evaluation in robust average-reward Markov Decision Processes (MDPs). Prior work in this setting has established only asymptotic convergence guarantees, leaving open the question of sample complexity. We address this gap by establishing that the robust Bellman operator is a contraction under the span semi-norm and by developing a stochastic approximation framework with controlled bias. Our approach builds on Multi-Level Monte Carlo (MLMC) techniques to estimate the robust Bellman operator efficiently. To overcome the infinite expected sample complexity inherent in standard MLMC, we introduce a truncation mechanism based on a geometric distribution, ensuring a finite, constant sample complexity while maintaining a small bias that decays exponentially with the truncation level. Our method achieves the order-optimal sample complexity of $\tilde{\mathcal{O}}(\epsilon^{-2})$ for robust policy evaluation and robust average-reward estimation, marking a significant advancement in robust reinforcement learning theory.
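To illustrate the truncated MLMC idea described in the abstract, the minimal Python sketch below draws a level from a geometric distribution truncated at a maximum level, forms the usual MLMC correction from even- and odd-indexed half-batches, and reweights it by the level probability, so the number of samples per call is bounded by construction. The names sample_next_state, empirical_target, n_max, and p, as well as the toy delta-contamination plug-in target in the usage example, are illustrative assumptions and not the paper's exact estimator or constants.

import numpy as np

def truncated_mlmc_estimate(sample_next_state, empirical_target, n_max=10, p=0.5, rng=None):
    # Sketch of a truncated-geometric MLMC estimator (illustrative, not the paper's exact form).
    rng = np.random.default_rng() if rng is None else rng

    # Draw the level N from a geometric distribution truncated at n_max:
    # P(N = n) proportional to p * (1 - p)^n for n = 0, ..., n_max.
    probs = p * (1.0 - p) ** np.arange(n_max + 1)
    probs = probs / probs.sum()
    n = rng.choice(n_max + 1, p=probs)

    # Base term: plug-in estimate built from a single next-state sample.
    base = empirical_target(sample_next_state(1))

    # Level-n correction: estimate from 2^(n+1) samples minus the average of the
    # even- and odd-indexed half-batch estimates, reweighted by P(N = n).
    samples = sample_next_state(2 ** (n + 1))
    full = empirical_target(samples)
    even = empirical_target(samples[0::2])
    odd = empirical_target(samples[1::2])
    return base + (full - 0.5 * (even + odd)) / probs[n]

# Toy usage with hypothetical quantities: a 3-state chain, current value estimate V,
# reward 1.0, and a delta-contamination robust target as the plug-in estimator.
rng = np.random.default_rng(0)
V = np.array([0.0, 1.0, 2.0])
delta = 0.1
sample_next = lambda k: rng.choice(3, size=k, p=[0.5, 0.3, 0.2])
robust_target = lambda s: 1.0 + (1.0 - delta) * V[s].mean() + delta * V.min()
print(truncated_mlmc_estimate(sample_next, robust_target, rng=rng))

Because the level is truncated at n_max, each call uses at most 2^(n_max + 1) + 1 samples, which is the finite, constant sample complexity mentioned above; the price is a bias that shrinks geometrically as n_max grows.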
@article{xu2025_2502.16816,
  title   = {Finite-Sample Analysis of Policy Evaluation for Robust Average Reward Reinforcement Learning},
  author  = {Yang Xu and Washim Uddin Mondal and Vaneet Aggarwal},
  journal = {arXiv preprint arXiv:2502.16816},
  year    = {2025}
}