
Corruption-Robust Offline Reinforcement Learning

Abstract

We study adversarial robustness in offline reinforcement learning. Given a batch dataset consisting of tuples $(s, a, r, s')$, an adversary is allowed to arbitrarily modify an $\epsilon$ fraction of the tuples. From the corrupted dataset, the learner aims to robustly identify a near-optimal policy. We first show that a worst-case $\Omega(d\epsilon)$ optimality gap is unavoidable in linear MDPs of dimension $d$, even if the adversary only corrupts the reward element of a tuple. This contrasts with dimension-free results in robust supervised learning and with the best-known lower bound for online RL with corruption. Next, we propose robust variants of the Least-Squares Value Iteration (LSVI) algorithm that utilize robust supervised learning oracles and achieve near-matching performance both with and without full data coverage. In the no-coverage case, the algorithm requires knowledge of $\epsilon$ to design the pessimism bonus. Surprisingly, this knowledge is necessary: we show that adapting to an unknown $\epsilon$ is impossible. This again contrasts with recent results on corruption-robust online RL and implies that robust offline RL is a strictly harder problem.
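As a rough illustration of the kind of procedure the abstract describes, the sketch below runs a backward pessimistic LSVI pass over a linear MDP, plugging in a simple trimmed-ridge regression as a stand-in robust supervised learning oracle and scaling the pessimism bonus with the (assumed known) corruption level $\epsilon$. The oracle, the feature map `phi`, the bonus formula, and all parameter names are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def trimmed_ridge(X, y, eps, lam=1.0, n_iter=5):
    """Illustrative robust regression oracle: repeatedly refit ridge
    regression after dropping the eps fraction of samples with the
    largest residuals (a crude stand-in for a robust oracle)."""
    n, d = X.shape
    keep = np.arange(n)
    w = np.zeros(d)
    for _ in range(n_iter):
        A = X[keep].T @ X[keep] + lam * np.eye(d)
        w = np.linalg.solve(A, X[keep].T @ y[keep])
        keep = np.argsort(np.abs(X @ w - y))[: max(1, int((1 - eps) * n))]
    return w

def robust_pessimistic_lsvi(data, phi, d, H, actions, eps, lam=1.0, c=1.0):
    """Backward LSVI with a robust oracle and an eps-dependent pessimism bonus.

    data[h]   : list of (s, a, r, s_next) tuples collected at step h
    phi(s, a) : d-dimensional feature vector (linear MDP assumption)
    eps       : assumed-known corruption fraction
    """
    w = np.zeros((H + 1, d))                 # value weights, w[H] = 0 (terminal)
    Sigma_inv = [np.eye(d) / lam] * (H + 1)  # regularized covariance inverses
    beta = np.zeros(H + 1)                   # bonus scales, beta[H] = 0

    def pessimistic_value(h, s):
        # V_h(s) = max_a clip(phi(s,a)^T w_h - beta_h * ||phi(s,a)||_{Sigma_h^{-1}}, 0, H)
        vals = []
        for a in actions:
            x = phi(s, a)
            q = x @ w[h] - beta[h] * np.sqrt(x @ Sigma_inv[h] @ x)
            vals.append(np.clip(q, 0.0, H))
        return max(vals)

    for h in reversed(range(H)):
        X = np.stack([phi(s, a) for (s, a, _, _) in data[h]])
        # Regression targets: reward plus pessimistic value at the next step.
        y = np.array([r + pessimistic_value(h + 1, s2)
                      for (_, _, r, s2) in data[h]])
        w[h] = trimmed_ridge(X, y, eps, lam)
        Sigma_inv[h] = np.linalg.inv(X.T @ X + lam * np.eye(d))
        # Hypothetical bonus scale: grows with eps, so the right level of
        # pessimism cannot be set without knowing the corruption fraction.
        beta[h] = c * (np.sqrt(d) + eps * len(data[h]) / np.sqrt(lam))

    return w, Sigma_inv, beta
```

The returned weights and bonus scales define a greedy pessimistic policy at each step via $\arg\max_a \big(\phi(s,a)^\top w_h - \beta_h \lVert \phi(s,a) \rVert_{\Sigma_h^{-1}}\big)$; the key point mirrored from the abstract is that the bonus scale depends on $\epsilon$, which the no-coverage result shows cannot be dispensed with.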
