Bandit Allocational Instability
When multi-armed bandit (MAB) algorithms allocate pulls among competing arms, the resulting allocation can vary substantially from run to run. This variability is particularly harmful in modern applications such as learning-enhanced platform operations and post-bandit statistical inference. Motivated by this, we introduce a new performance metric for MAB algorithms, termed allocation variability, defined as the largest (over arms) standard deviation of an arm's number of pulls. We establish a fundamental trade-off between allocation variability and regret, the canonical performance metric for reward maximization. In particular, for any algorithm, the worst-case regret $\overline{R}(T)$ and the worst-case allocation variability $\overline{V}(T)$ must satisfy $\overline{R}(T) \cdot \overline{V}(T) \gtrsim T^{3/2}$ as the horizon $T \to \infty$, as long as the number of arms is held fixed. This indicates that any minimax regret-optimal algorithm must incur worst-case allocation variability $\Omega(T)$, the largest possible scale, while any algorithm with sublinear worst-case regret must necessarily incur worst-case allocation variability $\omega(\sqrt{T})$. We further show that this lower bound is essentially tight, and that any point on the Pareto frontier can be achieved by a simple tunable algorithm, UCB-f, a generalization of the classic UCB1. Finally, we discuss the implications for platform operations and for statistical inference when bandit algorithms are used. As a byproduct of our result, we resolve an open question of Praharaj and Khamaru (2025).
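The allocation variability metric can be estimated by simulation. The sketch below is illustrative only: since the abstract does not specify UCB-f, it runs the classic UCB1 on Bernoulli arms (an assumed reward model) and uses Monte Carlo replications to compute the largest per-arm standard deviation of pull counts; the function names `ucb1` and `allocation_variability` are ours, not the paper's.

```python
import math
import random
import statistics

def ucb1(means, horizon, rng):
    """Run UCB1 on Bernoulli arms with the given means; return per-arm pull counts."""
    k = len(means)
    counts = [0] * k
    sums = [0.0] * k
    # Initialization: pull each arm once.
    for a in range(k):
        counts[a] += 1
        sums[a] += float(rng.random() < means[a])
    for t in range(k, horizon):
        # UCB1 index: empirical mean plus exploration bonus sqrt(2 ln t / n_a).
        idx = [sums[a] / counts[a] + math.sqrt(2.0 * math.log(t + 1) / counts[a])
               for a in range(k)]
        a = max(range(k), key=lambda i: idx[i])
        counts[a] += 1
        sums[a] += float(rng.random() < means[a])
    return counts

def allocation_variability(means, horizon, runs, seed=0):
    """Monte Carlo estimate: largest (over arms) std of the number of pulls."""
    rng = random.Random(seed)
    pulls = [ucb1(means, horizon, rng) for _ in range(runs)]
    return max(statistics.pstdev(p[a] for p in pulls) for a in range(len(means)))

if __name__ == "__main__":
    # Two statistically identical arms: the total pull budget is fixed, but how
    # it splits between the arms fluctuates heavily across runs.
    print(allocation_variability([0.5, 0.5], horizon=2000, runs=50))
```

With two identical arms the run-to-run split of pulls is highly unstable, so the estimated variability is a nontrivial fraction of the horizon; with well-separated arms it is much smaller.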