ResearchTrend.AI


Deterministic Uncertainty Propagation for Improved Model-Based Offline Reinforcement Learning
Neural Information Processing Systems (NeurIPS), 2024
17 January 2025
Abdullah Akgul
Manuel Haußmann
M. Kandemir
OffRL

Papers citing "Deterministic Uncertainty Propagation for Improved Model-Based Offline Reinforcement Learning"

44 / 44 papers shown
An Analytic Solution to Covariance Propagation in Neural Networks
Oren Wright
Yorie Nakahira
José M. F. Moura
24 Mar 2024
Simple Ingredients for Offline Reinforcement Learning
Edoardo Cetin
Andrea Tirinzoni
Matteo Pirotta
A. Lazaric
Yann Ollivier
Ahmed Touati
OffRL
19 Mar 2024
Harnessing Mixed Offline Reinforcement Learning Datasets via Trajectory Weighting
International Conference on Learning Representations (ICLR), 2023
Zhang-Wei Hong
Pulkit Agrawal
Rémi Tachet des Combes
Romain Laroche
OffRL
22 Jun 2023
On the 1-Wasserstein Distance between Location-Scale Distributions and the Effect of Differential Privacy
Saurab Chhachhi
Fei Teng
28 Apr 2023
Model-Based Uncertainty in Value Functions
International Conference on Artificial Intelligence and Statistics (AISTATS), 2023
Carlos E. Luis
A. Bottero
Julia Vinogradska
Felix Berkenkamp
Jan Peters
24 Feb 2023
Some Fundamental Aspects about Lipschitz Continuity of Neural Networks
International Conference on Learning Representations (ICLR), 2023
Grigory Khromov
Sidak Pal Singh
21 Feb 2023
Conservative Bayesian Model-Based Value Expansion for Offline Policy Optimization
International Conference on Learning Representations (ICLR), 2022
Jihwan Jeong
Xiaoyu Wang
Michael Gimelfarb
Hyunwoo J. Kim
Baher Abdulhai
Scott Sanner
OffRL
07 Oct 2022
Transformers are Sample-Efficient World Models
International Conference on Learning Representations (ICLR), 2022
Vincent Micheli
Eloi Alonso
François Fleuret
VLM OffRL
01 Sep 2022
Pessimistic Bootstrapping for Uncertainty-Driven Offline Reinforcement Learning
International Conference on Learning Representations (ICLR), 2022
Chenjia Bai
Lingxiao Wang
Zhuoran Yang
Zhihong Deng
Animesh Garg
Peng Liu
Zhaoran Wang
OffRL
23 Feb 2022
Adversarially Trained Actor Critic for Offline Reinforcement Learning
International Conference on Machine Learning (ICML), 2022
Ching-An Cheng
Tengyang Xie
Nan Jiang
Alekh Agarwal
OffRL
05 Feb 2022
Revisiting Design Choices in Offline Model-Based Reinforcement Learning
International Conference on Learning Representations (ICLR), 2021
Cong Lu
Philip J. Ball
Jack Parker-Holder
Michael A. Osborne
Stephen J. Roberts
OffRL
08 Oct 2021
Uncertainty-Based Offline Reinforcement Learning with Diversified Q-Ensemble
Gaon An
Seungyong Moon
Jang-Hyun Kim
Hyun Oh Song
OffRL
04 Oct 2021
Pessimistic Model-based Offline Reinforcement Learning under Partial Coverage
Masatoshi Uehara
Wen Sun
OffRL
13 Jul 2021
Bellman-consistent Pessimism for Offline Reinforcement Learning
Neural Information Processing Systems (NeurIPS), 2021
Tengyang Xie
Ching-An Cheng
Nan Jiang
Paul Mineiro
Alekh Agarwal
OffRL LRM
13 Jun 2021
A Minimalist Approach to Offline Reinforcement Learning
Neural Information Processing Systems (NeurIPS), 2021
Scott Fujimoto
S. Gu
OffRL
12 Jun 2021
Offline Reinforcement Learning as One Big Sequence Modeling Problem
Neural Information Processing Systems (NeurIPS), 2021
Michael Janner
Qiyang Li
Sergey Levine
OffRL
03 Jun 2021
Offline Reinforcement Learning with Fisher Divergence Critic Regularization
International Conference on Machine Learning (ICML), 2021
Ilya Kostrikov
Jonathan Tompson
Rob Fergus
Ofir Nachum
OffRL
14 Mar 2021
COMBO: Conservative Offline Model-Based Policy Optimization
Neural Information Processing Systems (NeurIPS), 2021
Tianhe Yu
Aviral Kumar
Rafael Rafailov
Aravind Rajeswaran
Sergey Levine
Chelsea Finn
OffRL
16 Feb 2021
Is Pessimism Provably Efficient for Offline RL?
International Conference on Machine Learning (ICML), 2020
Ying Jin
Zhuoran Yang
Zhaoran Wang
OffRL
30 Dec 2020
COG: Connecting New Skills to Past Experience with Offline Reinforcement Learning
Avi Singh
Albert Yu
Jonathan Yang
Jesse Zhang
Aviral Kumar
Sergey Levine
SSL OffRL OnRL
27 Oct 2020
Selective Dyna-style Planning Under Limited Model Capacity
Zaheer Abbas
Samuel Sokota
Erin J. Talvitie
Martha White
05 Jul 2020
A Deterministic Approximation to Neural SDEs
Andreas Look
M. Kandemir
Barbara Rakitsch
Jan Peters
DiffM
16 Jun 2020
Conservative Q-Learning for Offline Reinforcement Learning
Aviral Kumar
Aurick Zhou
George Tucker
Sergey Levine
OffRL OnRL
08 Jun 2020
MOPO: Model-based Offline Policy Optimization
Neural Information Processing Systems (NeurIPS), 2020
Tianhe Yu
G. Thomas
Lantao Yu
Stefano Ermon
James Zou
Sergey Levine
Chelsea Finn
Tengyu Ma
OffRL
27 May 2020
MOReL: Model-Based Offline Reinforcement Learning
Rahul Kidambi
Aravind Rajeswaran
Praneeth Netrapalli
Thorsten Joachims
OffRL
12 May 2020
Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems
Sergey Levine
Aviral Kumar
George Tucker
Justin Fu
OffRL GP
04 May 2020
D4RL: Datasets for Deep Data-Driven Reinforcement Learning
Justin Fu
Aviral Kumar
Ofir Nachum
George Tucker
Sergey Levine
GP OffRL
15 Apr 2020
Keep Doing What Worked: Behavioral Modelling Priors for Offline Reinforcement Learning
International Conference on Learning Representations (ICLR), 2020
Noah Y. Siegel
Jost Tobias Springenberg
Felix Berkenkamp
A. Abdolmaleki
Michael Neunert
Thomas Lampe
Agrim Gupta
Nicolas Heess
Martin Riedmiller
OffRL
19 Feb 2020
Behavior Regularized Offline Reinforcement Learning
Yifan Wu
George Tucker
Ofir Nachum
OffRL
26 Nov 2019
Advantage-Weighted Regression: Simple and Scalable Off-Policy Reinforcement Learning
Xue Bin Peng
Aviral Kumar
Grace Zhang
Sergey Levine
OffRL
01 Oct 2019
Deep Active Learning with Adaptive Acquisition
International Joint Conference on Artificial Intelligence (IJCAI), 2019
Manuel Haussmann
Fred Hamprecht
M. Kandemir
27 Jun 2019
When to Trust Your Model: Model-Based Policy Optimization
Neural Information Processing Systems (NeurIPS), 2019
Michael Janner
Justin Fu
Marvin Zhang
Sergey Levine
OffRL
19 Jun 2019
Stabilizing Off-Policy Q-Learning via Bootstrapping Error Reduction
Neural Information Processing Systems (NeurIPS), 2019
Aviral Kumar
Justin Fu
George Tucker
Sergey Levine
OffRL OnRL
03 Jun 2019
Learning When-to-Treat Policies
Journal of the American Statistical Association (JASA), 2019
Xinkun Nie
Emma Brunskill
Stefan Wager
CML OffRL
23 May 2019
Soft Actor-Critic Algorithms and Applications
Tuomas Haarnoja
Aurick Zhou
Kristian Hartikainen
George Tucker
Sehoon Ha
...
Vikash Kumar
Henry Zhu
Abhishek Gupta
Pieter Abbeel
Sergey Levine
13 Dec 2018
Off-Policy Deep Reinforcement Learning without Exploration
Scott Fujimoto
David Meger
Doina Precup
OffRL BDL
07 Dec 2018
Lightweight Probabilistic Deep Networks
Jochen Gast
Stefan Roth
UQCV OOD BDL
29 May 2018
Model-Based Value Estimation for Efficient Model-Free Reinforcement Learning
Vladimir Feinberg
Alvin Wan
Ion Stoica
Sai Li
Joseph E. Gonzalez
Sergey Levine
OffRL
28 Feb 2018
Addressing Function Approximation Error in Actor-Critic Methods
International Conference on Machine Learning (ICML), 2018
Scott Fujimoto
H. V. Hoof
David Meger
OffRL
26 Feb 2018
A Note on Concentration Inequalities for U-Statistics
Yannik Pitcan
17 Dec 2017
The Uncertainty Bellman Equation and Exploration
Brendan O'Donoghue
Ian Osband
Rémi Munos
Volodymyr Mnih
15 Sep 2017
Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
Balaji Lakshminarayanan
Alexander Pritzel
Charles Blundell
UQCV BDL
05 Dec 2016
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Sergey Ioffe
Christian Szegedy
OOD
11 Feb 2015
Adam: A Method for Stochastic Optimization
International Conference on Learning Representations (ICLR), 2014
Diederik P. Kingma
Jimmy Ba
ODL
22 Dec 2014