Primal-Dual Learning: Sample Complexity and Sublinear Run Time for Ergodic Markov Decision Problems

Consider the problem of approximating the optimal policy of a Markov decision process (MDP) by sampling state transitions. In contrast to existing reinforcement learning methods that are based on successive approximations to the nonlinear Bellman equation, we propose a Primal-Dual Learning method in light of the linear duality between the value and the policy. The learning method is model-free and makes primal-dual updates to the policy and value vectors as new data are revealed. For infinite-horizon undiscounted Markov decision processes with finite state space S and finite action space A, the learning method finds an \epsilon-optimal policy using \tilde{O}\left( \frac{(\tau \cdot t^*_{mix})^2 |S| |A|}{\epsilon^2} \right) sample transitions, where t^*_{mix} is an upper bound on the mixing times across all policies and \tau is a parameter characterizing the range of stationary distributions across policies. The learning method also applies to the computational problem of MDP, where the transition probabilities and rewards are explicitly given as the input. In the case where each state transition can be sampled in \tilde{O}(1) time, the learning method gives a sublinear-time algorithm for solving the average-reward MDP.
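The model-free primal-dual scheme described above can be sketched in code. The following is a minimal illustrative example, not the paper's algorithm: the toy ergodic MDP, the step sizes, and the specific update rules (a stochastic gradient step on the dual value vector and a multiplicative-weight step on the primal occupancy measure, derived from the standard linear-programming formulation of average-reward MDPs) are all assumptions made for the sketch.

```python
import numpy as np

# Hypothetical toy average-reward MDP with |S| = 2 states and |A| = 2 actions
# (not an instance from the paper). Action 1 always yields reward 1, action 0
# yields 0; transitions are uniform, so every policy mixes quickly (ergodic).
rng = np.random.default_rng(0)
nS, nA = 2, 2
P = np.full((nS, nA, nS), 1.0 / nS)     # P[s, a, s'] transition kernel
R = np.zeros((nS, nA))
R[:, 1] = 1.0                           # deterministic rewards

# Primal variable: occupancy measure mu over (s, a); dual variable: value v.
# The saddle-point objective is the Lagrangian of the average-reward LP:
#   L(mu, v) = sum_{s,a} mu(s,a) * [ r(s,a) + E_{s'} v(s') - v(s) ].
mu = np.full((nS, nA), 1.0 / (nS * nA))
v = np.zeros(nS)
alpha, beta = 0.05, 0.05                # primal / dual step sizes (assumed)

for t in range(10000):
    # Sample one state transition from the generative model.
    s = rng.integers(nS)
    a = rng.integers(nA)
    s_next = rng.choice(nS, p=P[s, a])
    r = R[s, a]

    # Dual step: stochastic gradient descent on v; the factor nS * nA is an
    # importance weight because (s, a) was sampled uniformly.
    grad_scale = mu[s, a] * nS * nA
    v[s] += beta * grad_scale
    v[s_next] -= beta * grad_scale

    # Primal step: multiplicative-weight (exponentiated-gradient) update on mu
    # using the sampled "advantage" r + v(s') - v(s), then renormalize.
    mu[s, a] *= np.exp(alpha * (r + v[s_next] - v[s]))
    mu /= mu.sum()

# Extract a deterministic policy: the action with largest occupancy per state.
policy = mu.argmax(axis=1)
print(policy)  # the rewarding action should dominate in both states
```

In this sketch each iteration touches only the sampled state-action pair, which is what makes a sublinear total run time plausible when individual transitions can be drawn cheaply.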