A Best-of-Both-Worlds Algorithm for Bandits with Delayed Feedback

We present a modified tuning of the algorithm of Zimmert and Seldin [2020] for adversarial multiarmed bandits with delayed feedback, which in addition to the minimax optimal adversarial regret guarantee shown by Zimmert and Seldin simultaneously achieves a near-optimal regret guarantee in the stochastic setting with fixed delays. Specifically, the adversarial regret guarantee is $\mathcal{O}\bigl(\sqrt{KT} + \sqrt{dT\log K}\bigr)$, where $T$ is the time horizon, $K$ is the number of arms, and $d$ is the fixed delay, whereas the stochastic regret guarantee is $\mathcal{O}\bigl(\sum_{i \neq i^*}\bigl(\tfrac{\log T}{\Delta_i} + \tfrac{d}{\Delta_i \log K}\bigr) + d K^{1/3}\log K\bigr)$, where $\Delta_i$ are the suboptimality gaps. We also present an extension of the algorithm to the case of arbitrary delays, which is based on oracle knowledge of the maximal delay $d_{\max}$ and achieves $\mathcal{O}\bigl(\sqrt{KT} + \sqrt{D\log K} + d_{\max}K^{1/3}\log K\bigr)$ regret in the adversarial regime, where $D$ is the total delay, and $\mathcal{O}\bigl(\sum_{i \neq i^*}\bigl(\tfrac{\log T}{\Delta_i} + \tfrac{\sigma_{\max}}{\Delta_i \log K}\bigr) + d_{\max}K^{1/3}\log K\bigr)$ regret in the stochastic regime, where $\sigma_{\max}$ is the maximal number of outstanding observations. Finally, we present a lower bound that matches the $\mathcal{O}\bigl(\sqrt{KT} + \sqrt{D\log K}\bigr)$ regret upper bound achieved by the skipping technique of Zimmert and Seldin [2020] in the adversarial setting.
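
To give a sense of the mechanics behind such guarantees, the following is a minimal, self-contained sketch of a follow-the-regularized-leader bandit loop fed with delayed, importance-weighted loss estimates, using a 1/2-Tsallis-entropy regularizer in the spirit of the algorithm this work builds on. It is a sketch under stated assumptions, not the authors' algorithm: the learning-rate schedule, the bisection solver, the simulated fixed-delay environment, and every name in it are illustrative choices, and the paper's modified tuning is not reproduced here.

```python
# Illustrative sketch only: a 1/2-Tsallis-entropy FTRL update driven by delayed,
# importance-weighted loss estimates. This is NOT the algorithm or tuning of
# Zimmert and Seldin [2020] or of this paper; the learning-rate schedule and the
# environment are assumptions made purely for exposition.
from collections import deque
import numpy as np

def tsallis_inf_distribution(L_hat, eta, tol=1e-10):
    """Solve x_i = 1 / (eta * (L_hat_i + lam))**2 with sum_i x_i = 1 by bisection over lam."""
    lo = -L_hat.min() + 1e-12                  # need L_hat_i + lam > 0 for every arm
    step = 1.0
    hi = lo + step
    while np.sum(1.0 / (eta * (L_hat + hi)) ** 2) > 1.0:
        step *= 2.0                            # grow hi until the total mass drops below 1
        hi = lo + step
    for _ in range(200):
        lam = 0.5 * (lo + hi)
        mass = np.sum(1.0 / (eta * (L_hat + lam)) ** 2)
        if abs(mass - 1.0) < tol:
            break
        lo, hi = (lam, hi) if mass > 1.0 else (lo, lam)
    x = 1.0 / (eta * (L_hat + lam)) ** 2
    return x / x.sum()                         # small renormalization for numerical safety

def run_delayed_bandit(losses, d, rng):
    """losses: (T, K) array of per-round losses in [0, 1]; d: fixed feedback delay."""
    T, K = losses.shape
    L_hat = np.zeros(K)                        # cumulative importance-weighted loss estimates
    pending = deque()                          # (arrival_round, arm, prob, observed_loss)
    total_loss = 0.0
    for t in range(T):
        eta = 1.0 / np.sqrt(t + 1)             # assumed schedule; the paper's tuning differs
        x = tsallis_inf_distribution(L_hat, eta)
        arm = rng.choice(K, p=x)
        total_loss += losses[t, arm]
        pending.append((t + d, arm, x[arm], losses[t, arm]))
        while pending and pending[0][0] <= t:  # feedback becomes available d rounds later
            _, a, p, loss = pending.popleft()
            L_hat[a] += loss / p               # importance-weighted loss estimate
    return total_loss

rng = np.random.default_rng(0)
losses = np.clip(rng.uniform(size=(5000, 5)) - 0.2 * (np.arange(5) == 0), 0.0, 1.0)  # arm 0 is best
print("total loss with delay d=50:", run_delayed_bandit(losses, d=50, rng=rng))
```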