Online Estimation and Inference for Robust Policy Evaluation in Reinforcement Learning

Abstract

Reinforcement learning has emerged as a prominent topic in modern statistical learning, with policy evaluation as a key component. Unlike the traditional machine learning literature on this topic, our work emphasizes statistical inference for the model parameters and value functions of reinforcement learning algorithms. While most existing analyses assume random rewards to follow standard distributions, we bring the perspective of robust statistics to reinforcement learning by simultaneously addressing outlier contamination and heavy-tailed rewards within a unified framework. In this paper, we develop a fully online robust policy evaluation procedure and establish a Bahadur-type representation of our estimator. Furthermore, we develop an online procedure to efficiently conduct statistical inference based on the asymptotic distribution. This paper connects robust statistics and statistical inference in reinforcement learning, offering a more versatile and reliable approach to online policy evaluation. Finally, we validate the efficacy of our algorithm through numerical experiments in simulations and real-world reinforcement learning tasks.
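To make the idea of robust online policy evaluation concrete, the sketch below shows one standard way to robustify an online temporal-difference update: the TD error is passed through a Huber-type influence function (a clip at a threshold `tau`) before it drives the parameter update, so a single outlier or heavy-tailed reward cannot move the estimate arbitrarily far. This is an illustrative sketch under generic assumptions (linear value approximation, fixed step size), not the paper's exact estimator or tuning; the names `huber_clip` and `robust_online_td` are hypothetical.

```python
import numpy as np

def huber_clip(x, tau):
    """Huber-type influence function: identity near zero, clipped at +/- tau."""
    return np.clip(x, -tau, tau)

def robust_online_td(transitions, d, alpha=0.05, gamma=0.9, tau=1.0):
    """Online TD(0) with linear value approximation V(s) ~ phi(s)^T theta.

    The TD error is clipped by a Huber-type threshold tau before each
    stochastic-approximation update, bounding the influence of any single
    (possibly contaminated or heavy-tailed) reward on the estimate.
    """
    theta = np.zeros(d)
    for phi, r, phi_next in transitions:  # stream of (phi(s), reward, phi(s'))
        delta = r + gamma * phi_next @ theta - phi @ theta  # TD error
        theta += alpha * huber_clip(delta, tau) * phi       # robustified update
    return theta
```

The key design choice is that the clipping acts on the TD error (the "residual"), mirroring how Huber regression bounds the influence of residual outliers while behaving like ordinary least squares on well-behaved data.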

@article{liu2025_2310.02581,
  title={Online Estimation and Inference for Robust Policy Evaluation in Reinforcement Learning},
  author={Weidong Liu and Jiyuan Tu and Xi Chen and Yichen Zhang},
  journal={arXiv preprint arXiv:2310.02581},
  year={2025}
}