
Efficient Best-of-Both-Worlds Algorithms for Contextual Combinatorial Semi-Bandits

Main: 11 pages · 1 figure · Bibliography: 3 pages · Appendix: 9 pages
Abstract

We introduce the first best-of-both-worlds algorithm for contextual combinatorial semi-bandits that simultaneously guarantees $\widetilde{\mathcal{O}}(\sqrt{T})$ regret in the adversarial regime and $\widetilde{\mathcal{O}}(\ln T)$ regret in the corrupted stochastic regime. Our approach builds on the Follow-the-Regularized-Leader (FTRL) framework equipped with a Shannon entropy regularizer, yielding a flexible method that admits efficient implementations. Beyond regret bounds, we tackle the practical bottleneck in FTRL (or, equivalently, Online Stochastic Mirror Descent) arising from the high-dimensional projection step encountered in each round of interaction. By leveraging the Karush-Kuhn-Tucker conditions, we transform the $K$-dimensional convex projection problem into a single-variable root-finding problem, dramatically accelerating each round. Empirical evaluations demonstrate that this combined strategy not only attains the attractive regret bounds of best-of-both-worlds algorithms but also delivers substantial per-round speed-ups, making it well-suited for large-scale, real-time applications.
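To illustrate the kind of reduction the abstract describes, here is a minimal sketch (not the paper's actual algorithm) of the standard KKT argument for an entropy-regularized FTRL update over a capped simplex $\{x : 0 \le x_i \le 1,\ \sum_i x_i = m\}$: stationarity gives $x_i(\lambda) = \min(1, e^{-\eta L_i - \lambda})$ for a single dual variable $\lambda$, and since $\sum_i x_i(\lambda)$ is monotone in $\lambda$, the $K$-dimensional projection reduces to one-dimensional root finding (bisection here, for simplicity). The function name, tolerance, and bracketing scheme are illustrative choices, not from the paper.

```python
import numpy as np

def project_capped_simplex(losses, eta, m, tol=1e-10):
    """Sketch: solve min_x  eta*<losses, x> + sum_i (x_i ln x_i - x_i)
    s.t. 0 <= x_i <= 1 and sum_i x_i = m.

    By the KKT conditions, x_i(lam) = min(1, exp(-eta*losses_i - lam)),
    and sum_i x_i(lam) is strictly decreasing in lam, so the equality
    constraint pins down lam via one-dimensional root finding."""
    def total(lam):
        # exp(min(z, 0)) equals min(1, exp(z)) and avoids overflow
        return np.exp(np.minimum(0.0, -eta * losses - lam)).sum()

    # Bracket the root: sum -> K as lam -> -inf, sum -> 0 as lam -> +inf.
    lo, hi = -1.0, 1.0
    while total(lo) < m:
        lo *= 2.0
    while total(hi) > m:
        hi *= 2.0
    # Bisection on the scalar dual variable lam.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if total(mid) > m:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return np.exp(np.minimum(0.0, -eta * losses - lam))
```

Each call costs $\mathcal{O}(K)$ per root-finding step, versus a generic $K$-dimensional convex solve; this gap is the per-round speed-up the abstract refers to.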
