
arXiv:1905.03030

Meta-learning of Sequential Strategies

8 May 2019
Pedro A. Ortega
Jane X. Wang
Mark Rowland
Tim Genewein
Z. Kurth-Nelson
Razvan Pascanu
N. Heess
J. Veness
Alex Pritzel
Pablo Sprechmann
Siddhant M. Jayakumar
Tom McGrath
Kevin J. Miller
M. G. Azar
Ian Osband
Neil C. Rabinowitz
András György
Silvia Chiappa
Simon Osindero
Yee Whye Teh
H. V. Hasselt
Nando de Freitas
M. Botvinick
Shane Legg
Abstract

In this report we review memory-based meta-learning as a tool for building sample-efficient strategies that learn from past experience to adapt to any task within a target class. Our goal is to equip the reader with the conceptual foundations of this tool for building new, scalable agents that operate on broad domains. To do so, we present basic algorithmic templates for building near-optimal predictors and reinforcement learners which behave as if they had a probabilistic model that allowed them to efficiently exploit task structure. Furthermore, we recast memory-based meta-learning within a Bayesian framework, showing that the meta-learned strategies are near-optimal because they amortize Bayes-filtered data, where the adaptation is implemented in the memory dynamics as a state-machine of sufficient statistics. Essentially, memory-based meta-learning translates the hard problem of probabilistic sequential inference into a regression problem.
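The claim that meta-learned strategies "amortize Bayes-filtered data" can be illustrated with a small Monte Carlo sketch (a hypothetical setup for illustration, not code from the paper): for coin-flip tasks whose bias is drawn from a uniform prior, the regression target that meta-training fits — the average next symbol across all sampled tasks that produced a given history — coincides with the Bayes posterior predictive, here Laplace's rule of succession.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tasks = 200_000
seq_len = 5

# Sample tasks: each task is a coin with bias theta ~ Uniform(0, 1),
# i.e. a Beta(1, 1) prior, then draw a binary sequence from each task.
thetas = rng.random(n_tasks)
seqs = (rng.random((n_tasks, seq_len)) < thetas[:, None]).astype(int)

# Fix one history of observations, e.g. [1, 1, 0, 1]. The meta-learning
# regression target for the next symbol is the empirical mean of that
# symbol over every sampled task that generated this exact history.
history = np.array([1, 1, 0, 1])
mask = (seqs[:, :4] == history).all(axis=1)
empirical = seqs[mask, 4].mean()

# Bayes posterior predictive under the Beta(1, 1) prior:
# Laplace's rule of succession, (k + 1) / (n + 2).
k, n = history.sum(), len(history)
bayes = (k + 1) / (n + 2)

print(f"regression target: {empirical:.3f}, Bayes predictive: {bayes:.3f}")
```

A sufficiently flexible memory-based predictor trained by regression on such data therefore recovers Bayes-optimal prediction without ever representing the posterior explicitly; its recurrent state only needs to carry the sufficient statistics (here, the counts `k` and `n`).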
