
A Theoretical Understanding of Gradient Bias in Meta-Reinforcement Learning

Neural Information Processing Systems (NeurIPS), 2021
Abstract

Gradient-based Meta-RL (GMRL) refers to methods that maintain a two-level optimisation procedure wherein the outer-loop meta-learner guides the inner-loop, gradient-based reinforcement learner to achieve fast adaptation. In this paper, we develop a unified framework that describes variations of GMRL algorithms and points out that the existing stochastic meta-gradient estimators adopted by GMRL are in fact \textbf{biased}. This meta-gradient bias has two sources: 1) the compositional bias incurred by the two-level problem structure, which has an upper bound of $\mathcal{O}\big(K\alpha^{K}\hat{\sigma}_{\text{In}}|\tau|^{-0.5}\big)$ \emph{w.r.t.} the number of inner-loop update steps $K$, the learning rate $\alpha$, the estimate variance $\hat{\sigma}^{2}_{\text{In}}$ and the sample size $|\tau|$; and 2) the multi-step Hessian estimation bias $\hat{\Delta}_{H}$ caused by the use of automatic differentiation, which has a polynomial impact $\mathcal{O}\big((K-1)(\hat{\Delta}_{H})^{K-1}\big)$ on the meta-gradient bias. We study tabular MDPs empirically and offer quantitative evidence that supports our theoretical findings on existing stochastic meta-gradient estimators. Furthermore, we conduct experiments on the Iterated Prisoner's Dilemma and Atari games to show how techniques such as off-policy learning and low-bias estimators can help rectify the gradient bias of GMRL algorithms in general.
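To make the two-level structure concrete, below is a minimal, hedged sketch (not the paper's implementation) of a MAML-style meta-gradient computed by differentiating through $K$ stochastic inner-loop updates with PyTorch autodiff. The losses are quadratic stand-ins for sampled RL objectives, and the names (`inner_loss`, `outer_loss`, `K`, `alpha`) are illustrative assumptions; the point is only to show where the stochastic inner gradients (source of compositional bias) and the backpropagation through inner steps (source of Hessian estimation bias) enter the estimator.

```python
# Hedged sketch, not the paper's code: a MAML-style meta-gradient obtained by
# differentiating through K stochastic inner-loop gradient steps with autodiff.
import torch

K, alpha = 3, 0.1                                # inner-loop steps and learning rate
theta = torch.randn(8, requires_grad=True)       # meta-parameters (outer-loop variable)

def inner_loss(params, batch):
    # Stand-in for a sampled policy-gradient surrogate; its finite-sample noise
    # is what induces the compositional bias discussed in the abstract.
    return ((params - batch) ** 2).mean()

def outer_loss(params, batch):
    # Stand-in for the meta-objective evaluated after adaptation.
    return ((params - batch) ** 2).mean()

params = theta
for _ in range(K):
    batch = torch.randn(8)                       # finite sample |tau| -> noisy inner gradient
    g = torch.autograd.grad(inner_loss(params, batch), params, create_graph=True)[0]
    params = params - alpha * g                  # keep the graph so the outer gradient
                                                 # backpropagates through this step
                                                 # (this is where Hessian terms appear)

meta_batch = torch.randn(8)
meta_grad = torch.autograd.grad(outer_loss(params, meta_batch), theta)[0]
# meta_grad is the stochastic meta-gradient estimate; with finite samples it is
# biased, and the bias compounds with the number of inner steps K.
```

In this toy setting the estimator happens to be well behaved, but in GMRL the inner gradients and Hessian-vector products are themselves sampled estimates, which is exactly the regime the paper's bounds characterise.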
