
Constrained Feedback Learning for Non-Stationary Multi-Armed Bandits

Main: 9 pages · 2 figures · 1 table · Bibliography: 3 pages · Appendix: 14 pages
Abstract

Non-stationary multi-armed bandits enable agents to adapt to changing environments by incorporating mechanisms to detect and respond to shifts in reward distributions, making them well suited for dynamic settings. However, existing approaches typically assume that reward feedback is available at every round, an assumption that overlooks many real-world scenarios where feedback is limited. In this paper, we take a significant step forward by introducing a new model of constrained feedback in non-stationary multi-armed bandits, where the availability of reward feedback is restricted. We propose the first prior-free algorithm for this setting, that is, one that requires no prior knowledge of the degree of non-stationarity, and show that it achieves near-optimal dynamic regret. Specifically, our algorithm attains a dynamic regret of $\tilde{\mathcal{O}}(K^{1/3} V_T^{1/3} T / B^{1/3})$, where $T$ is the number of rounds, $K$ is the number of arms, $B$ is the query budget, and $V_T$ is the variation budget capturing the degree of non-stationarity.
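To make the feedback model concrete, the sketch below illustrates the interaction protocol: the learner pulls an arm every round but may observe a reward in at most $B$ of the $T$ rounds. This is a minimal illustration under assumed details; the uniform query schedule and the epsilon-greedy placeholder policy are hypothetical choices for exposition, not the paper's prior-free algorithm.

```python
import random

def constrained_feedback_bandit(T, K, B, env_reward, seed=0):
    """Illustrative loop for the constrained-feedback model: an arm is
    pulled every round, but reward feedback is revealed only when one of
    the B queries is spent (hypothetical uniform query schedule)."""
    rng = random.Random(seed)
    query_gap = max(1, T // B)  # spread the B queries evenly over T rounds
    counts = [0] * K            # number of observed rewards per arm
    means = [0.0] * K           # empirical mean reward per arm
    queries_used = 0
    for t in range(T):
        # Placeholder epsilon-greedy policy (not the paper's algorithm).
        if rng.random() < 0.1 or all(c == 0 for c in counts):
            arm = rng.randrange(K)
        else:
            arm = max(range(K), key=lambda a: means[a])
        # Feedback is observed only when a query is spent; otherwise the
        # round yields no reward observation at all.
        if queries_used < B and t % query_gap == 0:
            r = env_reward(t, arm)  # reward distribution may drift with t
            queries_used += 1
            counts[arm] += 1
            means[arm] += (r - means[arm]) / counts[arm]
    return means, queries_used

# Example: two arms whose mean rewards swap halfway through the horizon,
# a simple non-stationary environment for the sketch above.
means, used = constrained_feedback_bandit(
    T=10_000, K=2, B=500,
    env_reward=lambda t, a: random.gauss(0.7 if (a == 0) == (t < 5_000) else 0.3, 0.1),
)
```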
