Conservative Bayesian Model-Based Value Expansion for Offline Policy Optimization
arXiv 2210.03802 · 7 October 2022
Jihwan Jeong, Xiaoyu Wang, Michael Gimelfarb, Hyunwoo J. Kim, Baher Abdulhai, Scott Sanner
Topics: OffRL
Papers citing "Conservative Bayesian Model-Based Value Expansion for Offline Policy Optimization" (9 / 9 papers shown):
1. Deterministic Uncertainty Propagation for Improved Model-Based Offline Reinforcement Learning
   Abdullah Akgul, Manuel Haußmann, M. Kandemir
   OffRL · 17 Jan 2025

2. SAMBO-RL: Shifts-aware Model-based Offline Reinforcement Learning
   Wang Luo, Haoran Li, Zicheng Zhang, Congying Han, Jiayu Lv, Tiande Guo
   OffRL · 23 Aug 2024

3. Residual Learning and Context Encoding for Adaptive Offline-to-Online Reinforcement Learning
   Mohammadreza Nakhaei, Aidan Scannell, J. Pajarinen
   OffRL · 12 Jun 2024

4. Trust the Model Where It Trusts Itself -- Model-Based Actor-Critic with Uncertainty-Aware Rollout Adaption
   Bernd Frauenknecht, Artur Eisele, Devdutt Subhasish, Friedrich Solowjow, Sebastian Trimpe
   29 May 2024

5. Offline Reinforcement Learning with Implicit Q-Learning
   Ilya Kostrikov, Ashvin Nair, Sergey Levine
   OffRL · 12 Oct 2021

6. Uncertainty-Based Offline Reinforcement Learning with Diversified Q-Ensemble
   Gaon An, Seungyong Moon, Jang-Hyun Kim, Hyun Oh Song
   OffRL · 04 Oct 2021

7. Offline Reinforcement Learning with Reverse Model-based Imagination
   Jianhao Wang, Wenzhe Li, Haozhe Jiang, Guangxiang Zhu, Siyuan Li, Chongjie Zhang
   OffRL · 01 Oct 2021

8. COMBO: Conservative Offline Model-Based Policy Optimization
   Tianhe Yu, Aviral Kumar, Rafael Rafailov, Aravind Rajeswaran, Sergey Levine, Chelsea Finn
   OffRL · 16 Feb 2021

9. Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems
   Sergey Levine, Aviral Kumar, George Tucker, Justin Fu
   OffRL · GP · 04 May 2020