Sample Complexity of Variance-reduced Distributionally Robust Q-learning

Abstract

Dynamic decision making under distributional shifts is of fundamental interest in the theory and applications of reinforcement learning: the distribution of the environment in which the data are collected can differ from that of the environment in which the model is deployed. This paper presents two novel model-free algorithms, namely distributionally robust Q-learning and its variance-reduced counterpart, that can effectively learn a robust policy despite distributional shifts. These algorithms are designed to efficiently approximate the $q$-function of an infinite-horizon $\gamma$-discounted robust Markov decision process with a Kullback-Leibler uncertainty set, to an entry-wise $\epsilon$-degree of precision. Further, the variance-reduced distributionally robust Q-learning combines synchronous Q-learning with variance-reduction techniques to enhance its performance. Consequently, we establish that it attains a minimax sample complexity upper bound of $\tilde{O}(|S||A|(1-\gamma)^{-4}\epsilon^{-2})$, where $S$ and $A$ denote the state and action spaces. This is the first complexity result that is independent of the uncertainty size $\delta$, thereby providing new complexity-theoretic insights. Additionally, a series of numerical experiments confirms the theoretical findings and the efficiency of the algorithms in handling distributional shifts.
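
The robust $q$-function targeted by both algorithms is the fixed point of a distributionally robust Bellman operator in which the transition kernel is adversarially perturbed within a KL ball of radius $\delta$ around the nominal model. By the standard dual representation of a worst-case expectation over a KL ball, $\inf_{Q: \mathrm{KL}(Q\|P)\le\delta} E_Q[v] = \sup_{\beta \ge 0}\{-\beta \log E_P[e^{-v/\beta}] - \beta\delta\}$, evaluating the operator reduces to a one-dimensional maximization over a dual variable $\beta$. The sketch below illustrates this fixed point with plain value iteration on a hypothetical toy MDP; it is not the paper's model-free, sample-based algorithm, and all names (`kl_dual_value`, `robust_q_iteration`) and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import logsumexp

# Hypothetical toy MDP: shapes and values are illustrative, not from the paper.
rng = np.random.default_rng(0)
n_states, n_actions = 5, 3
gamma, delta = 0.9, 0.1  # discount factor and KL uncertainty radius
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # nominal kernel
R = rng.random((n_states, n_actions))                             # rewards in [0, 1]

def kl_dual_value(v, p, delta):
    """Worst-case expectation of v over the KL ball of radius delta around p,
    via the dual form sup_{beta >= 0} -beta*log E_p[exp(-v/beta)] - beta*delta."""
    def neg_dual(beta):
        # logsumexp with weights b=p computes log E_p[exp(-v/beta)] stably.
        return -(-beta * logsumexp(-v / beta, b=p) - beta * delta)
    # Heuristic bracket for beta; the dual objective is concave on (0, inf).
    res = minimize_scalar(neg_dual, bounds=(1e-6, 100.0), method="bounded")
    return -res.fun

def robust_q_iteration(q):
    """One application of the distributionally robust Bellman operator."""
    v = q.max(axis=1)  # greedy value at next states
    q_new = np.empty_like(q)
    for s in range(n_states):
        for a in range(n_actions):
            q_new[s, a] = R[s, a] + gamma * kl_dual_value(v, P[s, a], delta)
    return q_new

q = np.zeros((n_states, n_actions))
for _ in range(200):  # the operator is a gamma-contraction, so this converges
    q = robust_q_iteration(q)
print("robust q-function:\n", np.round(q, 3))
```

The model-free algorithms in the paper instead estimate the inner expectation $E_P[e^{-v/\beta}]$ from samples of the nominal kernel, with the variance-reduced variant reducing the noise of these estimates to obtain the $\delta$-independent sample complexity bound; the sketch above only exhibits the robust fixed point they approximate.
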
