
Multi-Player Approaches for Dueling Bandits

Abstract

Various approaches have emerged for multi-armed bandits in distributed systems. The multiplayer dueling bandit problem, which arises in scenarios where only preference-based information such as human feedback is available, introduces the challenge of controlling collaborative exploration of non-informative arm pairs, but it has received little attention. To fill this gap, we demonstrate that the direct use of a Follow Your Leader black-box approach matches the regret lower bound for this setting when utilizing known dueling bandit algorithms as a foundation. Additionally, we analyze a message-passing fully distributed approach with a novel Condorcet-winner recommendation protocol, resulting in expedited exploration in many cases. Our experimental comparisons reveal that these multiplayer algorithms surpass single-player benchmark algorithms, underscoring their efficacy in addressing the nuanced challenges of the multiplayer dueling bandit setting.
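
The paper's algorithms are not reproduced here, but the black-box idea can be sketched. Below is a minimal Python sketch of a follow-your-leader style scheme under stated assumptions: a single leader runs a base dueling bandit policy, every player copies the leader's chosen pair each round, and all duel outcomes are pooled into the shared statistics. The base policy RUCBLite is a simplified RUCB-style heuristic; all names (RUCBLite, follow_your_leader, duel) and the toy preference matrix are illustrative assumptions, not taken from the paper.

import math
import random

class RUCBLite:
    """Simplified RUCB-style dueling-bandit policy (illustrative, not the paper's algorithm)."""

    def __init__(self, n_arms, alpha=0.51):
        self.n = n_arms
        self.alpha = alpha
        # wins[i][j] counts how often arm i beat arm j in a duel.
        self.wins = [[0] * n_arms for _ in range(n_arms)]
        self.t = 1

    def select_pair(self):
        # Optimistic estimate ucb[i][j] upper-bounds P(arm i beats arm j).
        ucb = [[1.0] * self.n for _ in range(self.n)]
        for i in range(self.n):
            for j in range(self.n):
                if i == j:
                    ucb[i][j] = 0.5
                    continue
                m = self.wins[i][j] + self.wins[j][i]
                if m > 0:
                    ucb[i][j] = self.wins[i][j] / m + math.sqrt(
                        self.alpha * math.log(self.t) / m)
        # Candidate Condorcet winners: arms not yet shown to lose to any other arm.
        cands = [i for i in range(self.n)
                 if all(ucb[i][j] >= 0.5 for j in range(self.n))]
        a = random.choice(cands) if cands else random.randrange(self.n)
        # Duel the candidate against its strongest-looking challenger.
        b = max((j for j in range(self.n) if j != a), key=lambda j: ucb[j][a])
        return a, b

    def update(self, winner, loser):
        self.wins[winner][loser] += 1
        self.t += 1


def follow_your_leader(policy, n_players, horizon, duel):
    """Every round, all players copy the leader's pair and pool their feedback."""
    for _ in range(horizon):
        a, b = policy.select_pair()       # leader picks once, broadcasts the pair
        for _ in range(n_players):        # each player performs the same duel
            winner, loser = duel(a, b)
            policy.update(winner, loser)  # pooled outcomes sharpen estimates faster


# Toy usage: 4 arms with preference matrix P; arm 0 is the Condorcet winner.
P = [[0.5, 0.7, 0.8, 0.9],
     [0.3, 0.5, 0.6, 0.7],
     [0.2, 0.4, 0.5, 0.6],
     [0.1, 0.3, 0.4, 0.5]]

def duel(a, b):
    return (a, b) if random.random() < P[a][b] else (b, a)

follow_your_leader(RUCBLite(n_arms=4), n_players=3, horizon=2000, duel=duel)

Pooling feedback this way is what lets the multiplayer system learn faster than any single player: each round yields n_players duel outcomes for the same pair, so the preference estimates concentrate more quickly than in the single-player case.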

@article{raveh2025_2405.16168,
  title={Multi-Player Approaches for Dueling Bandits},
  author={Or Raveh and Junya Honda and Masashi Sugiyama},
  journal={arXiv preprint arXiv:2405.16168},
  year={2025}
}