
Quick-Draw Bandits: Quickly Optimizing in Nonstationary Environments with Extremely Many Arms

Main: 9 pages · 13 figures · 1 table · Bibliography: 2 pages · Appendix: 1 page
Abstract

Canonical algorithms for multi-armed bandits typically assume a stationary reward environment where the size of the action space (number of arms) is small. More recently developed methods typically relax only one of these assumptions: existing non-stationary bandit policies are designed for a small number of arms, while Lipschitz, linear, and Gaussian process bandit policies are designed to handle a large (or infinite) number of arms in stationary reward environments under constraints on the reward function. In this manuscript, we propose a novel policy to learn reward environments over a continuous space using Gaussian interpolation. We show that our method efficiently learns continuous Lipschitz reward functions with $\mathcal{O}^*(\sqrt{T})$ cumulative regret. Furthermore, our method naturally extends to non-stationary problems with a simple modification. We finally demonstrate that our method is computationally favorable (100-10000x faster) and experimentally outperforms sliding Gaussian process policies on datasets with non-stationarity and an extremely large number of arms.
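The abstract describes the approach only at a high level, so the snippet below is a minimal illustrative sketch rather than the authors' Quick-Draw policy. It assumes a 1-D arm space on [0, 1], a Gaussian (RBF) interpolant over past pulls as the reward estimate, an ad-hoc exploration bonus weighted by `beta`, and a discount factor `gamma` that down-weights old observations to track non-stationarity; the bandwidth `h` and all other hyperparameters are hypothetical choices, not details taken from the paper.

```python
# Illustrative sketch only: NOT the authors' Quick-Draw algorithm.
# Idea shown: estimate a continuous reward surface from past (arm, reward)
# pairs via Gaussian (RBF) interpolation, add an exploration bonus, and
# discount stale observations to handle non-stationary rewards.
import numpy as np

rng = np.random.default_rng(0)

def gaussian_weights(x, centers, h):
    """RBF weights of a query arm x against previously pulled arms."""
    return np.exp(-((x - centers) ** 2) / (2.0 * h ** 2))

def choose_arm(hist_x, hist_r, hist_w, candidates, h=0.05, beta=1.0):
    """Score candidate arms by interpolated reward plus an exploration bonus."""
    if len(hist_x) == 0:
        return rng.choice(candidates)
    xs, rs, ws = np.asarray(hist_x), np.asarray(hist_r), np.asarray(hist_w)
    scores = np.empty(len(candidates))
    for i, c in enumerate(candidates):
        k = gaussian_weights(c, xs, h) * ws      # discounted kernel weights
        mass = k.sum()
        mean = (k @ rs) / mass if mass > 1e-12 else 0.0
        bonus = beta / np.sqrt(1.0 + mass)       # favor poorly covered regions
        scores[i] = mean + bonus
    return candidates[int(np.argmax(scores))]

def reward(x, t):
    """Toy non-stationary reward: the optimal arm drifts over time."""
    peak = 0.3 + 0.4 * (t / 500.0)
    return np.exp(-((x - peak) ** 2) / 0.01) + 0.05 * rng.standard_normal()

hist_x, hist_r, hist_w = [], [], []
gamma = 0.99                                     # forgetting rate (assumed)
for t in range(500):
    candidates = rng.uniform(0.0, 1.0, size=64)  # finite draw from the arm continuum
    x = choose_arm(hist_x, hist_r, hist_w, candidates)
    r = reward(x, t)
    hist_w = [w * gamma for w in hist_w]         # age existing observations
    hist_x.append(x); hist_r.append(r); hist_w.append(1.0)

print(f"final arm pulled: {x:.3f}, reward: {r:.3f}")
```

In this toy setup the interpolated mean tracks the drifting peak because discounted weights let recent pulls dominate; a sliding-window or full GP posterior would serve the same role at much higher per-step cost, which is the computational gap the abstract highlights.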

@article{everett2025_2505.24692,
  title={Quick-Draw Bandits: Quickly Optimizing in Nonstationary Environments with Extremely Many Arms},
  author={Derek Everett and Fred Lu and Edward Raff and Fernando Camacho and James Holt},
  journal={arXiv preprint arXiv:2505.24692},
  year={2025}
}