
An Improved Algorithm for Adversarial Linear Contextual Bandits via Reduction

Main: 10 pages · Bibliography: 3 pages · Appendix: 6 pages · 1 figure · 1 table
Abstract

We present an efficient algorithm for linear contextual bandits with adversarial losses and stochastic action sets. Our approach reduces this setting to misspecification-robust adversarial linear bandits with fixed action sets. Without knowledge of the context distribution or access to a context simulator, the algorithm achieves $\tilde{O}(\min\{d^2\sqrt{T}, \sqrt{d^3 T \log K}\})$ regret and runs in $\text{poly}(d, C, T)$ time, where $d$ is the feature dimension, $C$ is an upper bound on the number of linear constraints defining the action set in each round, $K$ is an upper bound on the number of actions in each round, and $T$ is the number of rounds. This resolves the open question of Liu et al. (2023) on whether one can obtain $\text{poly}(d)\sqrt{T}$ regret in polynomial time independent of the number of actions. For the important class of combinatorial bandits with adversarial losses and stochastic action sets, where the action sets can be described by a polynomial number of linear constraints, our algorithm is the first to achieve $\text{poly}(d)\sqrt{T}$ regret in polynomial time; to our knowledge, no prior algorithm achieves even $o(T)$ regret in polynomial time in this setting. When a simulator is available, the regret bound can be improved to $\tilde{O}(d\sqrt{L^\star})$, where $L^\star$ is the cumulative loss of the best policy.
