
SOReL and TOReL: Two Methods for Fully Offline Reinforcement Learning

Abstract

Sample efficiency remains a major obstacle for real-world adoption of reinforcement learning (RL): success has been limited to settings where simulators provide access to essentially unlimited environment interactions, which in reality are typically costly or dangerous to obtain. Offline RL in principle offers a solution by exploiting offline data to learn a near-optimal policy before deployment. In practice, however, current offline RL methods rely on extensive online interactions for hyperparameter tuning, and have no reliable bound on their initial online performance. To address these two issues, we introduce two algorithms. Firstly, we introduce SOReL, an algorithm for safe offline reinforcement learning. Using only offline data, our Bayesian approach infers a posterior over environment dynamics to obtain a reliable estimate of online performance via the posterior predictive uncertainty. Crucially, all hyperparameters are also tuned fully offline. Secondly, we introduce TOReL, a tuning-for-offline-reinforcement-learning algorithm that extends our information-rate-based offline hyperparameter tuning methods to general offline RL approaches. Our empirical evaluation confirms SOReL's ability to accurately estimate regret in the Bayesian setting, whilst TOReL's offline hyperparameter tuning achieves performance competitive with the best online hyperparameter tuning methods using only offline data. Thus, SOReL and TOReL mark a significant step towards safe and reliable offline RL, unlocking the potential for RL in the real world. Our implementations are publicly available: this https URL.
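
The abstract's core idea, estimating online performance from posterior predictive uncertainty over dynamics, can be illustrated with a minimal sketch. The code below is not the paper's implementation: the ensemble of dynamics parameters, the toy reward, and the candidate policy are all hypothetical stand-ins, and an ensemble is used only as a crude approximation to posterior samples over environment dynamics.

```python
# Hypothetical sketch: offline estimation of online performance via posterior
# predictive rollouts. An ensemble of dynamics parameters stands in for
# samples from a posterior over environment dynamics; all names and
# interfaces here are illustrative, not the paper's actual API.
import numpy as np

rng = np.random.default_rng(0)

def sampled_dynamics(theta):
    """One posterior sample of the dynamics: s' = f_theta(s, a)."""
    def step(state, action):
        next_state = np.tanh(theta @ np.concatenate([state, action]))
        reward = -np.sum(next_state ** 2)  # toy reward, for illustration only
        return next_state, reward
    return step

def rollout_return(step_fn, policy, horizon=50, state_dim=4):
    """Return of one rollout of `policy` under one sampled dynamics model."""
    state, total = np.zeros(state_dim), 0.0
    for _ in range(horizon):
        action = policy(state)
        state, reward = step_fn(state, action)
        total += reward
    return total

# Approximate posterior: a small ensemble of dynamics parameters (in practice
# these would be inferred from the offline dataset, not drawn at random).
posterior_samples = [rng.normal(size=(4, 5)) for _ in range(20)]
policy = lambda s: np.clip(-s[:1], -1.0, 1.0)  # candidate policy to evaluate

returns = np.array([
    rollout_return(sampled_dynamics(theta), policy) for theta in posterior_samples
])

# The spread of the posterior predictive returns gives a conservative estimate
# of online performance before any real environment interaction.
print(f"mean return {returns.mean():.2f}, "
      f"pessimistic 5th percentile {np.percentile(returns, 5):.2f}")
```

Under this view, a pessimistic quantile of the predictive return distribution can be compared across hyperparameter settings entirely offline, which is the spirit of the offline tuning described above.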

@article{fellows2025_2505.22442,
  title={SOReL and TOReL: Two Methods for Fully Offline Reinforcement Learning},
  author={Mattie Fellows and Clarisse Wibault and Uljad Berdica and Johannes Forkel and Jakob N. Foerster and Michael A. Osborne},
  journal={arXiv preprint arXiv:2505.22442},
  year={2025}
}