Transformers Can Learn Temporal Difference Methods for In-Context Reinforcement Learning
Jiuqi Wang, Ethan Blaser, Hadi Daneshmand, Shangtong Zhang
arXiv:2405.13861, 22 May 2024
Tags: OffRL

Papers citing "Transformers Can Learn Temporal Difference Methods for In-Context Reinforcement Learning" (7 of 7 papers shown)

Do LLM Agents Have Regret? A Case Study in Online Learning and Games
Chanwoo Park, Xiangyu Liu, Asuman Ozdaglar, Kaiqing Zhang
25 Mar 2024

Can large language models explore in-context?
Akshay Krishnamurthy, Keegan Harris, Dylan J. Foster, Cyril Zhang, Aleksandrs Slivkins
Tags: LM&Ro, LLMAG, LRM
22 Mar 2024

Generalization to New Sequential Decision Making Tasks with In-Context Learning
Sharath Chandra Raparthy, Eric Hambro, Robert Kirk, Mikael Henaff, Roberta Raileanu
Tags: OffRL
06 Dec 2023

Do Transformers Parse while Predicting the Masked Word?
Haoyu Zhao, A. Panigrahi, Rong Ge, Sanjeev Arora
14 Mar 2023

Structured State Space Models for In-Context Reinforcement Learning
Chris Xiaoxuan Lu, Yannick Schroecker, Albert Gu, Emilio Parisotto, Jakob N. Foerster, Satinder Singh, Feryal M. P. Behbahani
Tags: AI4TS
07 Mar 2023

Large Language Models can Implement Policy Iteration
Ethan A. Brooks, Logan Walls, Richard L. Lewis, Satinder Singh
Tags: LM&Ro, OffRL
07 Oct 2022

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine
Tags: OOD
09 Mar 2017