We study a setting in which a learner faces a sequence of decision tasks and is required to make good decisions as quickly as possible. Each task $t$ is associated with a pair $(X_t, \mu_t)$, where $X_t$ is a random variable and $\mu_t$ is its (unknown and potentially negative) expectation. The learner can draw arbitrarily many i.i.d. samples of $X_t$, but its expectation is never revealed. After some sampling is done, the learner can decide to stop and either accept the task, gaining $\mu_t$ as a reward, or reject it, getting zero reward instead. A distinguishing feature of our model is that the learner's performance is measured as the expected cumulative reward divided by the expected cumulative number of drawn samples. The learner's goal is to converge to the per-sample reward of the optimal policy within a fixed class. We design an online algorithm with data-dependent theoretical guarantees for finite sets of policies, and analyze its extension to infinite classes of policies. A key technical aspect of this setting, which sets it apart from stochastic bandits, is the impossibility of obtaining unbiased estimates of the policy's performance objective.
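As a rough formalization of the performance measure (the notation below is illustrative, not taken from the paper): if a policy $\pi$ draws $N_t(\pi)$ samples on task $t$ and collects reward $R_t(\pi) \in \{\mu_t, 0\}$ depending on whether it accepts or rejects, then its per-sample reward over $T$ tasks would read
\[
\rho_T(\pi) \;=\; \frac{\mathbb{E}\!\left[\sum_{t=1}^{T} R_t(\pi)\right]}{\mathbb{E}\!\left[\sum_{t=1}^{T} N_t(\pi)\right]},
\]
a ratio of expectations rather than an expectation of a per-round quantity, which is consistent with the abstract's remark that unbiased estimates of the objective are unavailable.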