Corruption-Robust Offline Reinforcement Learning

We study adversarial robustness in offline reinforcement learning. Given a batch dataset consisting of tuples $(s, a, r, s')$, an adversary is allowed to arbitrarily modify an $\epsilon$ fraction of the tuples. From the corrupted dataset, the learner aims to robustly identify a near-optimal policy. We first show that a worst-case $\Omega(d\epsilon)$ optimality gap is unavoidable in linear MDPs of dimension $d$, even if the adversary only corrupts the reward element in a tuple. This contrasts with dimension-free results in robust supervised learning and with the best-known lower bound in the online RL setting with corruption. Next, we propose robust variants of the Least-Squares Value Iteration (LSVI) algorithm that utilize robust supervised learning oracles and achieve near-matching performance both with and without full data coverage. In the no-coverage case, the algorithm requires knowledge of $\epsilon$ to design the pessimism bonus. Surprisingly, this knowledge is necessary: we show that adapting to an unknown $\epsilon$ is impossible. This again contrasts with recent results on corruption-robust online RL and implies that robust offline RL is a strictly harder problem.
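The two ingredients of the proposed approach, a robust regression oracle for fitting the value function from corrupted rewards and an $\epsilon$-dependent pessimism bonus, can be illustrated with a minimal sketch. The code below is not the paper's algorithm; it is one plausible instantiation of a single regression step, assuming a crude trimmed least-squares oracle (refit while discarding the $\epsilon$ fraction of points with the largest residuals) and an elliptical bonus $\beta \|\phi(s,a)\|_{\Lambda^{-1}}$. All names and the choice of $\beta$ are illustrative assumptions.

```python
import numpy as np

def trimmed_least_squares(X, y, eps, n_iter=10, lam=1.0):
    # Illustrative stand-in for a robust supervised learning oracle:
    # repeatedly refit ridge regression, each time discarding the eps
    # fraction of points with the largest residuals under the current fit.
    n, d = X.shape
    k = int(np.ceil(eps * n))
    keep = np.ones(n, dtype=bool)
    theta = np.zeros(d)
    for _ in range(n_iter):
        Xk, yk = X[keep], y[keep]
        theta = np.linalg.solve(Xk.T @ Xk + lam * np.eye(d), Xk.T @ yk)
        resid = np.abs(X @ theta - y)
        if k > 0:
            cutoff = np.partition(resid, n - k - 1)[n - k - 1]
            keep = resid <= cutoff  # keep the n - k smallest residuals
    return theta

def pessimistic_values(Phi, theta, Lambda_inv, beta):
    # Linear value estimate minus a pessimism bonus beta * ||phi||_{Lambda^{-1}};
    # beta is chosen using knowledge of eps (a hypothetical scaling here).
    bonus = beta * np.sqrt(np.einsum("nd,de,ne->n", Phi, Lambda_inv, Phi))
    return Phi @ theta - bonus

# Synthetic demo: linear rewards with an eps fraction adversarially shifted.
rng = np.random.default_rng(0)
n, d, eps = 500, 4, 0.1
Phi = rng.normal(size=(n, d))
theta_true = np.array([1.0, -0.5, 0.25, 2.0])
r = Phi @ theta_true + 0.1 * rng.normal(size=n)
bad = rng.choice(n, size=int(eps * n), replace=False)
r[bad] += 10.0  # adversarial reward corruption

theta_naive = np.linalg.solve(Phi.T @ Phi + np.eye(d), Phi.T @ r)
theta_robust = trimmed_least_squares(Phi, r, eps)

Lambda_inv = np.linalg.inv(Phi.T @ Phi + np.eye(d))
q_pess = pessimistic_values(Phi, theta_robust, Lambda_inv, beta=eps * np.sqrt(d))
```

On this synthetic data the trimmed fit recovers the reward parameter far more accurately than ordinary least squares, and the pessimism term only ever lowers the value estimate, matching the role it plays in the no-coverage setting.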