Non-Stationary Dueling Bandits

Abstract

We study the non-stationary dueling bandits problem with K arms, where the time horizon T consists of M stationary segments, each of which is associated with its own preference matrix. The learner repeatedly selects a pair of arms and observes a binary preference between them as feedback. To minimize the accumulated regret, the learner needs to pick the Condorcet winner of each stationary segment as often as possible, despite preference matrices and segment lengths being unknown. We propose the Beat the Winner Reset algorithm and prove a bound on its expected binary weak regret in the stationary case, which tightens the bound of current state-of-the-art algorithms. We also show a regret bound for the non-stationary case, without requiring knowledge of M or T. We further propose and analyze two meta-algorithms, DETECT for weak regret and Monitored Dueling Bandits for strong regret, both based on a detection-window approach that can incorporate any dueling bandit algorithm as a black box. Finally, we prove a worst-case lower bound for expected weak regret in the non-stationary case.
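To make the setting concrete, here is a minimal sketch of the dueling-bandit interaction and the binary weak regret described above. All names and the preference matrix are illustrative assumptions; this is a toy environment, not the paper's Beat the Winner Reset algorithm.

```python
import random

def duel(P, i, j, rng):
    """Sample a binary preference: 1 if arm i beats arm j.

    P[i][j] is the (assumed) probability that arm i is preferred to arm j.
    """
    return 1 if rng.random() < P[i][j] else 0

def condorcet_winner(P):
    """Return the arm that beats every other arm with probability > 1/2, if any."""
    K = len(P)
    for i in range(K):
        if all(P[i][j] > 0.5 for j in range(K) if j != i):
            return i
    return None  # no Condorcet winner exists for this matrix

def binary_weak_regret(P, pairs):
    """Count rounds in which neither selected arm is the Condorcet winner."""
    w = condorcet_winner(P)
    return sum(1 for (i, j) in pairs if w not in (i, j))

# Toy example: 3 arms in one stationary segment; arm 0 is the Condorcet winner.
P = [[0.5, 0.7, 0.8],
     [0.3, 0.5, 0.6],
     [0.2, 0.4, 0.5]]
rng = random.Random(0)
print(condorcet_winner(P))                                # → 0
print(binary_weak_regret(P, [(0, 1), (1, 2), (0, 2)]))    # → 1
print(duel(P, 0, 1, rng))                                 # 1 with probability 0.7
```

In the non-stationary problem, a fresh preference matrix (and hence possibly a new Condorcet winner) applies in each of the M segments, and the learner is not told when segments begin or end.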
