Keep Doing What Worked: Behavioral Modelling Priors for Offline Reinforcement Learning
arXiv:2002.08396 · 19 February 2020
Noah Y. Siegel, Jost Tobias Springenberg, Felix Berkenkamp, A. Abdolmaleki, Michael Neunert, Thomas Lampe, Roland Hafner, Nicolas Heess, Martin Riedmiller
Tags: OffRL
Papers citing "Keep Doing What Worked: Behavioral Modelling Priors for Offline Reinforcement Learning" (20 / 70 papers shown)

Behavioral Priors and Dynamics Models: Improving Performance and Domain Transfer in Offline RL
Catherine Cang, Aravind Rajeswaran, Pieter Abbeel, Michael Laskin · 16 Jun 2021 · OffRL

Offline RL Without Off-Policy Evaluation
David Brandfonbrener, William F. Whitney, Rajesh Ranganath, Joan Bruna · 16 Jun 2021 · OffRL

A Minimalist Approach to Offline Reinforcement Learning
Scott Fujimoto, S. Gu · 12 Jun 2021 · OffRL

Benchmarks for Deep Off-Policy Evaluation
Justin Fu, Mohammad Norouzi, Ofir Nachum, George Tucker, Ziyun Wang, …, Yutian Chen, Aviral Kumar, Cosmin Paduraru, Sergey Levine, T. Paine · 30 Mar 2021 · ELM, OffRL

Efficient Deep Reinforcement Learning with Imitative Expert Priors for Autonomous Driving
Zhiyu Huang, Jingda Wu, Chen Lv · 19 Mar 2021

Regularized Behavior Value Estimation
Çağlar Gülçehre, Sergio Gomez Colmenarejo, Ziyun Wang, Jakub Sygnowski, T. Paine, Konrad Zolna, Yutian Chen, Matthew W. Hoffman, Razvan Pascanu, Nando de Freitas · 17 Mar 2021 · OffRL

Instabilities of Offline RL with Pre-Trained Neural Representation
Ruosong Wang, Yifan Wu, Ruslan Salakhutdinov, Sham Kakade · 08 Mar 2021 · OffRL

Offline Reinforcement Learning with Pseudometric Learning
Robert Dadashi, Shideh Rezaeifar, Nino Vieillard, Léonard Hussenot, Olivier Pietquin, M. Geist · 02 Mar 2021 · OffRL

COMBO: Conservative Offline Model-Based Policy Optimization
Tianhe Yu, Aviral Kumar, Rafael Rafailov, Aravind Rajeswaran, Sergey Levine, Chelsea Finn · 16 Feb 2021 · OffRL

NeoRL: A Near Real-World Benchmark for Offline Reinforcement Learning
Rongjun Qin, Songyi Gao, Xingyuan Zhang, Zhen Xu, Shengkai Huang, Zewen Li, Weinan Zhang, Yang Yu · 01 Feb 2021 · OffRL

Is Pessimism Provably Efficient for Offline RL?
Ying Jin, Zhuoran Yang, Zhaoran Wang · 30 Dec 2020 · OffRL

Semi-supervised reward learning for offline reinforcement learning
Ksenia Konyushkova, Konrad Zolna, Y. Aytar, Alexander Novikov, Scott E. Reed, Serkan Cabi, Nando de Freitas · 12 Dec 2020 · SSL, OffRL

Offline Learning from Demonstrations and Unlabeled Experience
Konrad Zolna, Alexander Novikov, Ksenia Konyushkova, Çağlar Gülçehre, Ziyun Wang, Y. Aytar, Misha Denil, Nando de Freitas, Scott E. Reed · 27 Nov 2020 · SSL, OffRL

Behavior Priors for Efficient Reinforcement Learning
Dhruva Tirumala, Alexandre Galashov, Hyeonwoo Noh, Leonard Hasenclever, Razvan Pascanu, …, Guillaume Desjardins, Wojciech M. Czarnecki, Arun Ahuja, Yee Whye Teh, N. Heess · 27 Oct 2020

Learning Dexterous Manipulation from Suboptimal Experts
Rae Jeong, Jost Tobias Springenberg, Jackie Kay, Daniel Zheng, Yuxiang Zhou, Alexandre Galashov, N. Heess, F. Nori · 16 Oct 2020 · OffRL

Learning Arbitrary-Goal Fabric Folding with One Hour of Real Robot Experience
Robert Lee, Daniel Ward, Akansel Cosgun, Vibhavari Dasagi, Peter Corke, Jurgen Leitner · 07 Oct 2020 · SSL

FOCAL: Efficient Fully-Offline Meta-Reinforcement Learning via Distance Metric Learning and Behavior Regularization
Lanqing Li, Rui Yang, Dijun Luo · 02 Oct 2020 · OffRL

Learning Off-Policy with Online Planning
Harshit S. Sikchi, Wenxuan Zhou, David Held · 23 Aug 2020 · OffRL

Critic Regularized Regression
Ziyun Wang, Alexander Novikov, Konrad Zolna, Jost Tobias Springenberg, Scott E. Reed, …, Noah Y. Siegel, J. Merel, Çağlar Gülçehre, N. Heess, Nando de Freitas · 26 Jun 2020 · OffRL

AWAC: Accelerating Online Reinforcement Learning with Offline Datasets
Ashvin Nair, Abhishek Gupta, Murtaza Dalal, Sergey Levine · 16 Jun 2020 · OffRL, OnRL