Fast and Accurate Repeated Decision Making

Abstract

We study a setting in which a learner faces a sequence of decision tasks and is required to make good decisions as quickly as possible. Each task $n$ is associated with a pair $(X_n, \mu_n)$, where $X_n$ is a random variable and $\mu_n$ is its (unknown and potentially negative) expectation. The learner can draw arbitrarily many i.i.d. samples of $X_n$, but its expectation $\mu_n$ is never revealed. After some sampling is done, the learner can decide to stop and either accept the task, gaining $\mu_n$ as a reward, or reject it, getting zero reward instead. A distinguishing feature of our model is that the learner's performance is measured as the expected cumulative reward divided by the expected cumulative number of drawn samples. The learner's goal is to converge to the per-sample reward of the optimal policy within a fixed class. We design an online algorithm with data-dependent theoretical guarantees for finite sets of policies, and analyze its extension to infinite classes of policies. A key technical aspect of this setting, which sets it apart from stochastic bandits, is the impossibility of obtaining unbiased estimates of the policy's performance objective.
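To make the objective concrete, here is a minimal simulation sketch of the setting, assuming for illustration that each $X_n$ is Gaussian and that the learner uses a simple hypothetical fixed-budget policy: draw $m$ samples per task and accept whenever the sample mean is positive. The paper's algorithm and policy class are not specified here; this only illustrates the ratio-of-expectations performance measure.

```python
import random

def run_policy(tasks, m, seed=0):
    """Simulate a fixed-budget accept/reject policy on a task sequence.

    For each task n the policy draws m i.i.d. samples of X_n, accepts
    the task (earning mu_n) if the sample mean is positive, and rejects
    it (earning 0) otherwise. Returns the per-sample reward: cumulative
    reward divided by the cumulative number of drawn samples.
    """
    rng = random.Random(seed)
    total_reward = 0.0
    total_samples = 0
    for mu, sigma in tasks:  # each task: X_n ~ Normal(mu_n, sigma_n)
        draws = [rng.gauss(mu, sigma) for _ in range(m)]
        total_samples += m
        if sum(draws) / m > 0:  # accept: reward is mu_n (never observed)
            total_reward += mu
    return total_reward / total_samples

# Example: half the tasks have positive mean, half negative.
tasks = [(0.5, 1.0)] * 50 + [(-0.5, 1.0)] * 50
print(run_policy(tasks, m=10))
```

Note that the learner never observes $\mu_n$ itself, only samples of $X_n$; the true reward enters only the performance evaluation. Because both the reward and the sample count are random and the objective is their ratio of expectations, the per-sample reward of a policy cannot in general be estimated without bias, which is the difficulty highlighted in the abstract.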
