
A Theoretical Understanding of Gradient Bias in Meta-Reinforcement Learning

Neural Information Processing Systems (NeurIPS), 2021
31 December 2021
Xidong Feng, Bo Liu, Jie Ren, Luo Mai, Rui Zhu, Haifeng Zhang, Jun Wang, Yaodong Yang
arXiv:2112.15400
Abstract

Gradient-based Meta-RL (GMRL) refers to methods that maintain two-level optimisation procedures wherein the outer-loop meta-learner guides the inner-loop gradient-based reinforcement learner to achieve fast adaptations. In this paper, we develop a unified framework that describes variations of GMRL algorithms and points out that existing stochastic meta-gradient estimators adopted by GMRL are actually biased. Such meta-gradient bias comes from two sources: 1) the compositional bias incurred by the two-level problem structure, which has an upper bound of $\mathcal{O}\big(K\alpha^{K}\hat{\sigma}_{\text{In}}|\tau|^{-0.5}\big)$ w.r.t. inner-loop update step $K$, learning rate $\alpha$, estimate variance $\hat{\sigma}^{2}_{\text{In}}$ and sample size $|\tau|$, and 2) the multi-step Hessian estimation bias $\hat{\Delta}_{H}$ due to the use of autodiff, which has a polynomial impact $\mathcal{O}\big((K-1)(\hat{\Delta}_{H})^{K-1}\big)$ on the meta-gradient bias. We study tabular MDPs empirically and offer quantitative evidence that corroborates our theoretical findings on existing stochastic meta-gradient estimators. Furthermore, we conduct experiments on the Iterated Prisoner's Dilemma and Atari games to show how other methods, such as off-policy learning and low-bias estimators, can help correct the gradient bias in GMRL algorithms in general.
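To see where the multi-step Hessian terms enter, it helps to write out the chain rule for differentiating through the inner loop. The expansion below is the standard one for gradient-based inner updates; the notation is illustrative and not copied from the paper. With inner updates $\theta_{k+1} = \theta_k - \alpha \nabla_\theta \hat{J}_{\text{in}}(\theta_k)$,

\[
\frac{\partial \theta_K}{\partial \theta_0}
  = \prod_{k=0}^{K-1} \Big( I - \alpha \nabla^2_\theta \hat{J}_{\text{in}}(\theta_k) \Big),
\qquad
\nabla_{\theta_0} J_{\text{out}}(\theta_K)
  = \Big(\frac{\partial \theta_K}{\partial \theta_0}\Big)^{\top} \nabla_\theta J_{\text{out}}(\theta_K),
\]

so the meta-gradient multiplies one sampled Hessian factor per inner step, and a per-step Hessian estimation error $\hat{\Delta}_{H}$ compounds across steps, consistent with the polynomial dependence on $(\hat{\Delta}_{H})^{K-1}$ stated above.

The sketch below (Python/JAX; the objectives and all names are hypothetical stand-ins, not the paper's code) shows the mechanics: $K$ stochastic inner-loop gradient steps are taken on a sampled inner objective, and autodiff through those steps produces the meta-gradient, implicitly forming the Hessian products above.

import jax
import jax.numpy as jnp

K = 3        # number of inner-loop update steps
ALPHA = 0.1  # inner-loop learning rate alpha

def inner_loss(theta, tau):
    # Hypothetical stand-in for a sampled estimate of the inner
    # objective; its gradient carries the sampling noise sigma_In.
    return jnp.mean((tau @ theta) ** 2)

def outer_loss(theta):
    # Hypothetical stand-in for the outer meta-objective.
    return jnp.sum(jnp.sin(theta))

def adapted_params(theta0, taus):
    # K stochastic gradient steps, each on a fresh finite sample
    # tau_k; finite |tau| is what introduces the compositional bias.
    theta = theta0
    for k in range(K):
        g = jax.grad(inner_loss)(theta, taus[k])
        theta = theta - ALPHA * g
    return theta

def meta_objective(theta0, taus):
    return outer_loss(adapted_params(theta0, taus))

# Differentiating through the K inner steps: the resulting
# meta-gradient contains a product of per-step Hessian factors,
# so per-step Hessian estimation error compounds across steps.
theta0 = jnp.ones(4)
taus = jax.random.normal(jax.random.PRNGKey(0), (K, 8, 4))
meta_grad = jax.grad(meta_objective)(theta0, taus)
print(meta_grad)

Enlarging each sample tau_k shrinks the noise in the inner gradients and hence the compositional bias, at the $|\tau|^{-0.5}$ rate given in the abstract.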
