
Repeated A/B Testing

Abstract

We study a setting in which a learner faces a sequence of A/B tests and has to make as many good decisions as possible within a given amount of time. Each A/B test $n$ is associated with an unknown (and potentially negative) reward $\mu_n \in [-1,1]$, drawn i.i.d. from an unknown and fixed distribution. For each A/B test $n$, the learner sequentially draws i.i.d. samples of a $\{-1,1\}$-valued random variable with mean $\mu_n$ until a halting criterion is met. The learner then decides to either accept the reward $\mu_n$ or to reject it and get zero instead. We measure the learner's performance as the sum of the expected rewards of the accepted $\mu_n$ divided by the total expected number of used time steps (which is different from the expected ratio between the total reward and the total number of used time steps). We design an algorithm and prove a data-dependent regret bound against any set of policies based on an arbitrary halting criterion and decision rule. Though our algorithm borrows ideas from multiarmed bandits, the two settings are significantly different and not directly comparable. In fact, the value of $\mu_n$ is never observed directly in our setting---unlike rewards in stochastic bandits. Moreover, the particular structure of our problem allows our regret bounds to be independent of the number of policies.
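
To make the protocol and the performance measure concrete, the following is a minimal simulation sketch, not the paper's algorithm: the fixed sample budget, the uniform distribution of the rewards $\mu_n$, and the sign-of-the-empirical-mean decision rule are hypothetical choices used only to illustrate one possible halting criterion and decision rule, and how the reward-per-time-step objective is computed.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_policy(num_tests=10_000, samples_per_test=50):
    """Simulate the repeated A/B testing protocol with a simple fixed-budget
    policy: draw a fixed number of {-1, 1} samples per test, then accept the
    reward mu_n iff the empirical mean is positive. This policy is only an
    illustrative stand-in; the paper's learner uses a different strategy."""
    total_accepted_reward = 0.0
    total_time_steps = 0
    for _ in range(num_tests):
        # Unknown reward mu_n in [-1, 1], drawn i.i.d. from a fixed distribution
        # (uniform on [-1, 1] here, purely as an example).
        mu = rng.uniform(-1.0, 1.0)
        # Draw i.i.d. {-1, 1} samples with mean mu until the halting criterion
        # is met (here: a fixed sample budget).
        samples = rng.choice([-1.0, 1.0], size=samples_per_test,
                             p=[(1 - mu) / 2, (1 + mu) / 2])
        total_time_steps += samples_per_test
        # Decision rule: accept mu_n (collect its reward) or reject and get zero.
        if samples.mean() > 0:
            total_accepted_reward += mu
    # Performance: sum of rewards of the accepted tests divided by the total
    # number of used time steps (a simulation estimate of the paper's
    # ratio-of-expectations objective).
    return total_accepted_reward / total_time_steps

print(f"reward per time step: {run_policy():.4f}")
```

Note that the objective averages the accepted rewards over all time steps spent sampling, including those of rejected tests, so a policy that samples longer pays for the extra information in the denominator.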
