
Adversarial Dueling Bandits

International Conference on Machine Learning (ICML), 2020
Abstract

We introduce the problem of regret minimization in Adversarial Dueling Bandits. As in classic Dueling Bandits, the learner has to repeatedly choose a pair of items and observe only a relative binary `win-loss' feedback for this pair, but here this feedback is generated from an arbitrary preference matrix, possibly chosen adversarially. Our main result is an algorithm whose $T$-round regret compared to the \emph{Borda-winner} from a set of $K$ items is $\tilde{O}(K^{1/3}T^{2/3})$, as well as a matching $\Omega(K^{1/3}T^{2/3})$ lower bound. We also prove a similar high probability regret bound. We further consider a simpler \emph{fixed-gap} adversarial setup, which bridges between two extreme preference feedback models for dueling bandits: stationary preferences and an arbitrary sequence of preferences. For the fixed-gap adversarial setup we give an $\tilde{O}((K/\Delta^2)\log T)$ regret algorithm, where $\Delta$ is the gap in Borda scores between the best item and all other items, and show a lower bound of $\Omega(K/\Delta^2)$ indicating that our dependence on the main problem parameters $K$ and $\Delta$ is tight (up to logarithmic factors).
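To make the regret criterion concrete, the sketch below gives the Borda-score formalization commonly used in dueling bandits; the notation $P_t$ for the round-$t$ preference matrix and the averaging over the played pair are assumptions here and may differ in detail from the paper's exact definitions.

\[
  b_t(i) \;=\; \frac{1}{K-1}\sum_{j \neq i} P_t(i,j),
  \qquad P_t(i,j) = \Pr[\text{item } i \text{ beats item } j \text{ at round } t],
\]
\[
  R_T \;=\; \max_{k \in [K]} \sum_{t=1}^{T} b_t(k) \;-\; \frac{1}{2}\sum_{t=1}^{T}\bigl(b_t(i_t) + b_t(j_t)\bigr),
\]

where $(i_t, j_t)$ is the pair the learner plays at round $t$. Under this reading, the gap $\Delta$ in the fixed-gap setup would be the minimum difference between the best item's average Borda score and that of any other item.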
