Heterogeneous Multi-player Multi-armed Bandits: Closing the Gap and Generalization

27 October 2021
Chengshuai Shi, Wei Xiong, Cong Shen, Jing Yang
arXiv: 2110.14622

Papers citing "Heterogeneous Multi-player Multi-armed Bandits: Closing the Gap and Generalization"

15 / 15 papers shown
Combining Diverse Information for Coordinated Action: Stochastic Bandit Algorithms for Heterogeneous Agents
Lucia Gordon, Esther Rolf, Milind Tambe
06 Aug 2024

PPA-Game: Characterizing and Learning Competitive Dynamics Among Online Content Creators
Renzhe Xu, Haotian Wang, Xingxuan Zhang, Yue Liu, Peng Cui
22 Mar 2024

Improved Bandits in Many-to-one Matching Markets with Incentive Compatibility
Fang-yuan Kong, Shuai Li
03 Jan 2024

Harnessing the Power of Federated Learning in Federated Contextual Bandits
Chengshuai Shi, Ruida Zhou, Kun Yang, Cong Shen
26 Dec 2023 (FedML)

Finite-Time Frequentist Regret Bounds of Multi-Agent Thompson Sampling on Sparse Hypergraphs
Tianyuan Jin, Hao-Lun Hsu, William Chang, Pan Xu
24 Dec 2023

Multi-Agent Bandit Learning through Heterogeneous Action Erasure Channels
Osama A. Hanna, Merve Karakas, Lin Yang, Christina Fragouli
21 Dec 2023

Adversarial Attacks on Cooperative Multi-agent Bandits
Jinhang Zuo, Zhiyao Zhang, Xuchuang Wang, Cheng Chen, Shuai Li, J. C. Lui, Mohammad Hajiesmaili, Adam Wierman
03 Nov 2023 (AAML)

Cooperative Multi-agent Bandits: Distributed Algorithms with Optimal Individual Regret and Constant Communication Costs
L. Yang, Xuchuang Wang, Mohammad Hajiesmaili, Lijun Zhang, John C. S. Lui, Don Towsley
08 Aug 2023

Constant or Logarithmic Regret in Asynchronous Multiplayer Bandits
Hugo Richard, Etienne Boursier, Vianney Perchet
31 May 2023

Competing for Shareable Arms in Multi-Player Multi-Armed Bandits
Renzhe Xu, Hongya Wang, Xingxuan Zhang, Yangqiu Song, Peng Cui
30 May 2023

On-Demand Communication for Asynchronous Multi-Agent Bandits
Y. Chen, L. Yang, Xuchuang Wang, Xutong Liu, Mohammad Hajiesmaili, John C. S. Lui, Don Towsley
15 Feb 2023

Decentralized Stochastic Multi-Player Multi-Armed Walking Bandits
Guojun Xiong, Jiaqiang Li
12 Dec 2022

A Survey on Multi-Player Bandits
Etienne Boursier, Vianney Perchet
29 Nov 2022

Multi-Player Bandits Robust to Adversarial Collisions
Shivakumar Mahesh, A. Rangi, Haifeng Xu, Long Tran-Thanh
15 Nov 2022 (AAML)

Multi-Player Multi-Armed Bandits with Finite Shareable Resources Arms: Learning Algorithms & Applications
Xuchuang Wang, Hong Xie, John C. S. Lui
28 Apr 2022