
Batch Value-function Approximation with Only Realizability

International Conference on Machine Learning (ICML), 2021
Abstract

We make progress in a long-standing problem of batch reinforcement learning (RL): learning Q^\star from an exploratory and polynomial-sized dataset, using a realizable and otherwise arbitrary function class. In fact, all existing algorithms demand function-approximation assumptions stronger than realizability, and the mounting negative evidence has led to a conjecture that sample-efficient learning is impossible in this setting (Chen and Jiang, 2019). Our algorithm, BVFT, breaks the hardness conjecture (albeit under a stronger notion of exploratory data) via a tournament procedure that reduces the learning problem to pairwise comparison, and solves the latter with the help of a state-action partition constructed from the compared functions. We also discuss how BVFT can be applied to model selection among other extensions and open problems.
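
To make the tournament idea concrete, below is a minimal Python sketch of a BVFT-style procedure. It assumes candidate Q-functions are callables q(s, a), the batch is a list of (s, a, r, s', next_actions) transitions, and a `resolution` parameter controls the value discretization that induces the state-action partition from each compared pair. The function names, the data format, and the cell-averaged residual are illustrative simplifications, not the paper's exact estimator.

```python
# Illustrative BVFT-style tournament sketch (not the paper's exact estimator).
# Assumptions: candidates are callables q(s, a) -> float; the batch contains
# tuples (s, a, r, s_next, next_actions); `resolution` sets the width of the
# value bins that define the state-action partition for a compared pair.
from collections import defaultdict
import math


def pairwise_bellman_error(q_f, q_g, batch, gamma, resolution=0.1):
    """Estimate the Bellman error of q_f projected onto the partition
    induced by jointly discretizing the values of q_f and q_g on the batch."""
    cells = defaultdict(list)
    for s, a, r, s_next, next_actions in batch:
        # Cell id: joint (discretized) level set of the two compared functions at (s, a).
        key = (round(q_f(s, a) / resolution), round(q_g(s, a) / resolution))
        # Empirical Bellman-optimality backup of q_f at this transition.
        target = r + gamma * max(q_f(s_next, a2) for a2 in next_actions)
        cells[key].append((q_f(s, a), target))
    # Piecewise-constant projection: within each cell, compare the
    # cell-average prediction to the cell-average backup.
    total, n = 0.0, 0
    for pairs in cells.values():
        preds, targets = zip(*pairs)
        avg_pred, avg_target = sum(preds) / len(preds), sum(targets) / len(targets)
        total += len(pairs) * (avg_pred - avg_target) ** 2
        n += len(pairs)
    return math.sqrt(total / n)


def bvft_tournament(candidates, batch, gamma, resolution=0.1):
    """Return the candidate whose worst-case pairwise error is smallest."""
    scores = []
    for i, q_f in enumerate(candidates):
        worst = max(
            pairwise_bellman_error(q_f, q_g, batch, gamma, resolution)
            for j, q_g in enumerate(candidates) if j != i
        )
        scores.append(worst)
    return candidates[min(range(len(candidates)), key=scores.__getitem__)]
```

The winner is the candidate whose worst pairwise projected Bellman error is smallest; the intuition the tournament exploits is that, under realizability, Q^\star itself incurs low error against every opponent's induced partition.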
