Hierarchical Transformers are Efficient Meta-Reinforcement Learners

9 February 2024
Gresa Shala, André Biedenkapp, Josif Grabocka
OffRL
ArXiv · PDF · HTML

Papers citing "Hierarchical Transformers are Efficient Meta-Reinforcement Learners"

3 of 3 papers shown
AMAGO-2: Breaking the Multi-Task Barrier in Meta-Reinforcement Learning with Transformers
  Jake Grigsby, Justin Sasek, Samyak Parajuli, Daniel Adebi, Amy Zhang, Yuke Zhu
  OffRL · 20 / 2 / 0 · 17 Nov 2024

XLand-100B: A Large-Scale Multi-Task Dataset for In-Context Reinforcement Learning
  Alexander Nikulin, Ilya Zisman, Alexey Zemtsov, Viacheslav Sinii
  99 / 4 / 0 · 13 Jun 2024

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
  Chelsea Finn, Pieter Abbeel, Sergey Levine
  OOD · 243 / 11,568 / 0 · 09 Mar 2017