We develop several provably efficient model-free reinforcement learning (RL) algorithms for infinite-horizon average-reward Markov Decision Processes (MDPs). We consider both the online setting and the setting with access to a simulator. In the online setting, we propose model-free RL algorithms based on reference-advantage decomposition. Our algorithm achieves $\widetilde{O}(\mathrm{poly}(S, A, \mathrm{sp}(h^*))\sqrt{T})$ regret after $T$ steps, where $SA$ is the size of the state-action space and $\mathrm{sp}(h^*)$ is the span of the optimal bias function $h^*$. Our results are the first to achieve the optimal dependence on $T$ for weakly communicating MDPs. In the simulator setting, we propose a model-free RL algorithm that finds an $\epsilon$-optimal policy using $\widetilde{O}(\mathrm{poly}(S, A, \mathrm{sp}(h^*))/\epsilon^{2})$ samples, whereas the minimax lower bound is $\Omega(SA\,\mathrm{sp}(h^*)/\epsilon^{2})$. Our results are based on two new techniques that are unique to the average-reward setting: 1) better discounted approximation by value-difference estimation; 2) efficient construction of a confidence region for the optimal bias function with space complexity $O(SA)$.
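For readers unfamiliar with the quantities appearing in these bounds, the following is a minimal background sketch, not taken from the paper itself: it records the standard average-reward optimality equation and the span seminorm $\mathrm{sp}(h^*)$ for a weakly communicating MDP. The symbols $\rho^*$ (optimal gain), $r$ (reward), and $P$ (transition kernel) are assumed notation introduced here for illustration.

% Standard background, assuming a weakly communicating MDP with reward r(s,a)
% and transition kernel P(s'|s,a); rho^* denotes the optimal gain (average reward).
\begin{align}
  % Average-reward optimality equation satisfied by the optimal gain \rho^*
  % and an optimal bias function h^*:
  \rho^* + h^*(s) &= \max_{a \in \mathcal{A}} \Big[\, r(s,a)
      + \sum_{s' \in \mathcal{S}} P(s' \mid s,a)\, h^*(s') \,\Big],
      \qquad \forall s \in \mathcal{S}, \\
  % Span seminorm of the optimal bias function, the complexity measure
  % appearing in the regret and sample-complexity bounds above:
  \mathrm{sp}(h^*) &= \max_{s \in \mathcal{S}} h^*(s)
      - \min_{s \in \mathcal{S}} h^*(s).
\end{align}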