ResearchTrend.AI

arXiv:2211.07817
Multi-Player Bandits Robust to Adversarial Collisions

15 November 2022
Shivakumar Mahesh
A. Rangi
Haifeng Xu
Long Tran-Thanh
Abstract

Motivated by cognitive radios, stochastic Multi-Player Multi-Armed Bandits have been extensively studied in recent years. In this setting, each player pulls an arm and receives the corresponding reward if there is no collision, i.e., if the arm was selected by a single player; otherwise, if a collision occurs, the player receives no reward. In this paper, we consider the presence of malicious players (or attackers) who obstruct the cooperative players (or defenders) from maximizing their rewards by deliberately colliding with them. We provide the first decentralized and robust algorithm, RESYNC, for defenders, whose performance deteriorates gracefully as $\tilde{O}(C)$ as the number of collisions $C$ caused by the attackers increases. We show that this algorithm is order-optimal by proving a lower bound that scales as $\Omega(C)$. The algorithm is agnostic both to the strategy used by the attackers and to the number of collisions $C$ they cause.
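The collision model described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's RESYNC algorithm: the function name `play_round`, the Bernoulli arm rewards, and the example arm means are assumptions chosen for the sketch. A player earns a stochastic reward from its arm only when no other player (defender or attacker) pulled the same arm.

```python
import random
from collections import Counter

def play_round(choices, arm_means, rng):
    """One round of the multi-player bandit collision model.

    choices   : list of arm indices, one per player (defenders and attackers)
    arm_means : Bernoulli mean reward of each arm
    Returns a list of rewards, one per player.
    """
    counts = Counter(choices)
    rewards = []
    for arm in choices:
        if counts[arm] > 1:
            # Collision: every player on this arm gets zero reward.
            rewards.append(0.0)
        else:
            # Sole puller: draw a Bernoulli reward from the arm.
            rewards.append(1.0 if rng.random() < arm_means[arm] else 0.0)
    return rewards

rng = random.Random(0)
arm_means = [0.9, 0.5, 0.2]  # hypothetical arm means
# Players 0 and 2 both pull arm 0 (e.g., an attacker deliberately
# colliding with a defender), so both receive zero reward.
print(play_round([0, 1, 0], arm_means, rng))
```

An attacker that knows a defender's arm choice can thus zero out that defender's reward for the round at no informational cost to itself, which is the obstruction the paper's $\tilde{O}(C)$ guarantee must withstand.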
