
A Reduction from Reinforcement Learning to No-Regret Online Learning

International Conference on Artificial Intelligence and Statistics (AISTATS), 2019
Abstract

We present a reduction from reinforcement learning (RL) to no-regret online learning based on the saddle-point formulation of RL, by which "any" online algorithm with sublinear regret can generate policies with provable performance guarantees. This new perspective decouples the RL problem into two parts: regret minimization and function approximation. The first part admits a standard online-learning analysis, and the second part can be quantified independently of the learning algorithm. Therefore, the proposed reduction can be used as a tool to systematically design new RL algorithms. We demonstrate this idea by devising a simple RL algorithm based on mirror descent and the generative-model oracle. For any $\gamma$-discounted tabular RL problem, with probability at least $1-\delta$, it learns an $\epsilon$-optimal policy using at most $\tilde{O}\left(\frac{|\mathcal{S}||\mathcal{A}|\log(\frac{1}{\delta})}{(1-\gamma)^4\epsilon^2}\right)$ samples. Furthermore, this algorithm admits a direct extension to linearly parameterized function approximators for large-scale applications, with computation and sample complexities independent of $|\mathcal{S}|$ and $|\mathcal{A}|$, though at the cost of potential approximation bias.
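The abstract does not spell out the algorithm, but the saddle-point view it refers to is the linear-programming formulation of a discounted MDP, in which a value-function player and an occupancy-measure player play a two-player game over the Lagrangian $L(v,\mu) = (1-\gamma)\,\rho^\top v + \sum_{s,a}\mu(s,a)\left(r(s,a) + \gamma\,\mathbb{E}_{s'\sim P(\cdot|s,a)}[v(s')] - v(s)\right)$. Below is a minimal, hypothetical sketch of stochastic mirror descent on that Lagrangian with a generative-model oracle; the toy MDP, the `generative_model` helper, the step sizes, and the iteration count are placeholders and are not taken from the paper.

```python
import numpy as np

# Sketch only (not the paper's exact algorithm): stochastic mirror descent on the
# saddle-point (LP) formulation of a gamma-discounted tabular MDP, using a
# generative-model oracle. The MDP, step sizes, and horizon below are hypothetical.

rng = np.random.default_rng(0)
S, A, gamma = 5, 3, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))   # hypothetical transition kernel P(s'|s,a)
R = rng.uniform(size=(S, A))                 # hypothetical rewards in [0, 1]
rho = np.ones(S) / S                         # initial-state distribution

def generative_model(s, a):
    """Oracle: sample a reward and a next state for the query (s, a)."""
    return R[s, a], rng.choice(S, p=P[s, a])

T = 20000
eta_mu, eta_v = 0.05, 0.05                   # hypothetical step sizes
v_max = 1.0 / (1.0 - gamma)
mu = np.ones((S, A)) / (S * A)               # occupancy-measure player (simplex over S x A)
v = np.zeros(S)                              # value player (box [0, v_max]^S)
mu_avg = np.zeros((S, A))

for t in range(T):
    # mu-player (ascent on L): one oracle sample per (s, a) gives an unbiased
    # estimate of its payoff r(s,a) + gamma v(s') - v(s).
    g_mu = np.empty((S, A))
    for s in range(S):
        for a in range(A):
            r, s_next = generative_model(s, a)
            g_mu[s, a] = r + gamma * v[s_next] - v[s]
    mu *= np.exp(eta_mu * g_mu)              # entropic mirror (exponentiated-gradient) step
    mu /= mu.sum()

    # v-player (descent on L): draw (s, a) from mu, then one next state from the oracle,
    # giving the unbiased gradient (1-gamma) rho + gamma e_{s'} - e_s.
    idx = rng.choice(S * A, p=mu.ravel())
    s, a = divmod(idx, A)
    _, s_next = generative_model(s, a)
    g_v = (1.0 - gamma) * rho
    g_v[s_next] += gamma
    g_v[s] -= 1.0
    v = np.clip(v - eta_v * g_v, 0.0, v_max)  # projected (Euclidean) step

    mu_avg += mu

# Extract a policy from the averaged occupancy measure: pi(a|s) proportional to mu_avg(s, a).
pi = mu_avg / mu_avg.sum(axis=1, keepdims=True)
print(pi)
```

Here the occupancy-measure player uses an entropic mirror map and the value player a Euclidean projection onto $[0, 1/(1-\gamma)]^{|\mathcal{S}|}$; averaging the occupancy iterates and normalizing per state yields the output policy. This is one natural way to instantiate the reduction, not necessarily the authors' exact construction or the one that attains the stated sample complexity.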
