A framework for Multi-A(rmed)/B(andit) testing with online FDR control

We propose an alternative framework to existing setups for controlling false alarms when multiple A/B tests are run over time. This setup arises in many practical applications, e.g., when pharmaceutical companies test new treatment options against control pills for different diseases, or when internet companies test their default webpages against various alternatives over time. Our framework replaces a sequence of A/B tests with a sequence of best-arm MAB instances, where each instance corresponds to an adaptive test of a single hypothesis that can be continuously monitored by the data scientist and stopped at any time. To control for multiple testing, we demonstrate how to interleave the MAB tests with an online false discovery rate (FDR) algorithm so that we obtain the best of both worlds: low sample complexity and anytime online FDR control. Our main contributions are: (i) to propose reasonable definitions of a null hypothesis for MAB instances; (ii) to demonstrate how one can derive an always-valid sequential p-value that allows continuous monitoring of each MAB test; and (iii) to show that using the rejection thresholds of online-FDR algorithms as confidence levels for the MAB algorithms results in sample-optimality, high power, and low FDR at any point in time. We run extensive simulations to verify our claims, and also report results on real data collected from the New Yorker Cartoon Caption contest.
View on arXiv
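
To make the interleaving concrete, below is a minimal, self-contained Python sketch of the idea: an online-FDR rule (here a LORD++-style level schedule) hands each new experiment a rejection threshold, and that threshold is used as the confidence level of a sequential best-arm test against the control arm. Everything in the sketch is an illustrative assumption rather than the paper's exact algorithm: the names (`mab_fdr`, `run_mab_test`), the round-robin sampling (a real implementation would sample adaptively, e.g. LUCB-style), the 1/j^2 gamma sequence, and the use of anytime Hoeffding confidence bounds in place of explicit always-valid p-values.

```python
import numpy as np


def gamma_sequence(n, c=6 / np.pi**2):
    # Any nonnegative sequence summing to at most 1 works for LORD-type
    # rules; gamma_j proportional to 1/j^2 is an illustrative choice.
    j = np.arange(1, n + 1)
    return c / j**2


def anytime_radius(n_pulls, delta):
    # Hoeffding-style radius valid uniformly over time for rewards in [0, 1],
    # via the union bound sum_n 2*exp(-2 n r_n^2) <= delta.
    n = max(n_pulls, 1)
    return np.sqrt(np.log(np.pi**2 * n**2 / (3 * delta)) / (2 * n))


def run_mab_test(true_means, alpha, max_pulls=20000, rng=None):
    """Sequential test of H0: 'no alternative arm beats the control (arm 0)'.

    Pulls arms round-robin and rejects H0 at level alpha as soon as some
    alternative arm's anytime lower confidence bound exceeds the control's
    upper confidence bound. Returns True iff H0 was rejected in budget.
    """
    rng = rng or np.random.default_rng()
    K = len(true_means)
    sums, counts = np.zeros(K), np.zeros(K, dtype=int)
    delta = alpha / (2 * K)  # split the error budget across arms (conservative)
    for t in range(max_pulls):
        a = t % K  # round-robin sampling (simplification of adaptive sampling)
        sums[a] += rng.binomial(1, true_means[a])
        counts[a] += 1
        if counts.min() == 0:
            continue
        means = sums / counts
        ucb0 = means[0] + anytime_radius(counts[0], delta)
        lcbs = means[1:] - np.array(
            [anytime_radius(c, delta) for c in counts[1:]]
        )
        if lcbs.max() > ucb0:  # some alternative provably beats the control
            return True
    return False


def mab_fdr(experiments, alpha=0.05, w0=0.025, seed=0):
    """Interleave MAB tests with LORD++-style online-FDR levels.

    `experiments` is a list of arm-mean vectors (arm 0 = control). The level
    handed to the i-th MAB instance is the online-FDR rejection threshold;
    each rejection earns back alpha-wealth for future tests.
    """
    rng = np.random.default_rng(seed)
    n = len(experiments)
    gamma = gamma_sequence(n)
    rejection_times = []  # 0-indexed times tau_j of past rejections
    decisions = []
    for i in range(n):
        # LORD++-style level: alpha_i = gamma_i * W0
        #   + (alpha - W0) * gamma_{i - tau_1} + alpha * sum_{j>=2} gamma_{i - tau_j}
        level = gamma[i] * w0
        for j, tau in enumerate(rejection_times):
            if i > tau:
                g = gamma[i - tau - 1]
                level += (alpha - w0) * g if j == 0 else alpha * g
        rejected = run_mab_test(experiments[i], level, rng=rng)
        decisions.append(rejected)
        if rejected:
            rejection_times.append(i)
    return decisions


if __name__ == "__main__":
    # Toy stream: in every other experiment an alternative truly beats control.
    exps = [[0.5, 0.5, 0.5] if i % 2 else [0.5, 0.5, 0.65] for i in range(20)]
    print(mab_fdr(exps, alpha=0.05))
```

The key design point the sketch tries to convey is the feedback loop: the online-FDR rule decides how much error budget each experiment may spend, the bandit spends exactly that budget as its confidence level, and rejections replenish the budget for later experiments.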