Design from Policies: Conservative Test-Time Adaptation for Offline Policy Optimization
arXiv:2306.14479 · 26 June 2023
Jinxin Liu, Hongyin Zhang, Zifeng Zhuang, Yachen Kang, Donglin Wang, Bin Wang
OffRL
Papers citing "Design from Policies: Conservative Test-Time Adaptation for Offline Policy Optimization" (9 of 9 papers shown)
Each entry lists the title, authors, topic tags, the listing's three unlabeled counters, and the publication date.
Sample-efficient Unsupervised Policy Cloning from Ensemble Self-supervised Labeled Videos
Xin Liu, Yaran Chen, Haoran Li
SSL | 94 | 0 | 0 | 14 Dec 2024
Planning with Diffusion for Flexible Behavior Synthesis
Michael Janner, Yilun Du, J. Tenenbaum, Sergey Levine
DiffM | 202 | 622 | 0 | 20 May 2022
Offline Reinforcement Learning with Implicit Q-Learning
Ilya Kostrikov, Ashvin Nair, Sergey Levine
OffRL | 212 | 832 | 0 | 12 Oct 2021
Uncertainty-Based Offline Reinforcement Learning with Diversified Q-Ensemble
Gaon An, Seungyong Moon, Jang-Hyun Kim, Hyun Oh Song
OffRL | 95 | 261 | 0 | 04 Oct 2021
BRAC+: Improved Behavior Regularized Actor Critic for Offline Reinforcement Learning
Chi Zhang, S. Kuppannagari, Viktor Prasanna
OffRL | 41 | 15 | 0 | 02 Oct 2021
Conservative Offline Distributional Reinforcement Learning
Yecheng Jason Ma, Dinesh Jayaraman, Osbert Bastani
OffRL | 65 | 78 | 0 | 12 Jul 2021
COMBO: Conservative Offline Model-Based Policy Optimization
Tianhe Yu, Aviral Kumar, Rafael Rafailov, Aravind Rajeswaran, Sergey Levine, Chelsea Finn
OffRL | 214 | 413 | 0 | 16 Feb 2021
Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation
Justin Fu, Sergey Levine
OffRL | 62 | 46 | 0 | 16 Feb 2021
Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems
Sergey Levine, Aviral Kumar, George Tucker, Justin Fu
OffRL, GP | 329 | 1,944 | 0 | 04 May 2020