
Nearly Minimax Optimal Regret for Learning Infinite-horizon Average-reward MDPs with Linear Function Approximation

Abstract

We study reinforcement learning in an infinite-horizon average-reward setting with linear function approximation, where the transition probability function of the underlying Markov Decision Process (MDP) admits a linear form over a feature mapping of the current state, action, and next state. We propose a new algorithm, UCRL2-VTR, which can be seen as an extension of the UCRL2 algorithm with linear function approximation. We show that UCRL2-VTR with a Bernstein-type bonus can achieve a regret of $\tilde{O}(d\sqrt{DT})$, where $d$ is the dimension of the feature mapping, $T$ is the horizon, and $D$ is the diameter of the MDP. We also prove a matching lower bound of $\tilde{\Omega}(d\sqrt{DT})$, which suggests that the proposed UCRL2-VTR is minimax optimal up to logarithmic factors. To the best of our knowledge, our algorithm is the first nearly minimax optimal RL algorithm with function approximation in the infinite-horizon average-reward setting.
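As a reading aid, the display below sketches the standard linear mixture MDP assumption and the average-reward regret that the abstract refers to. The notation ($\phi$, $\theta^*$, $J^*$, $r$, $s_t$, $a_t$) is our gloss of the usual formulation in this literature and is not spelled out in the abstract itself; "VTR" presumably stands for value-targeted regression, as in the related UCRL-VTR algorithm, where the learner estimates $\theta^*$ by regressing onto value targets.

\[
  \mathbb{P}(s' \mid s, a) \;=\; \bigl\langle \phi(s, a, s'),\, \theta^* \bigr\rangle,
  \qquad \phi : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}^d, \quad \theta^* \in \mathbb{R}^d,
\]
\[
  \mathrm{Regret}(T) \;=\; \sum_{t=1}^{T} \bigl( J^* - r(s_t, a_t) \bigr),
\]

where $J^*$ is the optimal long-run average reward and $r(s_t, a_t)$ is the reward collected at step $t$. Under this assumption, uncertainty about the transition dynamics reduces to uncertainty about the $d$-dimensional vector $\theta^*$, which is how the dimension $d$ (rather than the number of states) enters the regret bound.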
