
arXiv:1911.05142

Incentivized Exploration for Multi-Armed Bandits under Reward Drift

12 November 2019
Zhiyuan Liu
Huazheng Wang
Fan Shen
Kai-Chun Liu
Lijun Chen
Abstract

We study incentivized exploration for the multi-armed bandit (MAB) problem, where players receive compensation for exploring arms other than the greedy choice and may provide biased feedback on rewards. We seek to understand the impact of this drifted reward feedback by analyzing the performance of three instantiations of the incentivized MAB algorithm: UCB, $\varepsilon$-Greedy, and Thompson Sampling. Our results show that all three achieve $\mathcal{O}(\log T)$ regret and compensation under the drifted reward, and are therefore effective in incentivizing exploration. Numerical examples are provided to complement the theoretical analysis.
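To make the setting concrete, here is a minimal sketch of an incentivized UCB loop on Bernoulli arms: the principal pays compensation whenever the UCB arm differs from the greedy (highest empirical mean) arm, and the player's reported reward drifts upward in proportion to the payment. The payment rule, the linear drift model, and the `drift` coefficient are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import numpy as np

def incentivized_ucb(true_means, horizon, drift=0.1, seed=0):
    """Sketch of incentivized UCB under reward drift (assumed model).

    Returns cumulative (pseudo-)regret and total compensation paid.
    """
    rng = np.random.default_rng(seed)
    k = len(true_means)
    counts = np.zeros(k)
    means = np.zeros(k)        # empirical means of the (drifted) feedback
    regret = 0.0
    compensation = 0.0
    best = max(true_means)

    # Initialization: pull each arm once.
    for a in range(k):
        means[a] = rng.binomial(1, true_means[a])
        counts[a] = 1
        regret += best - true_means[a]

    for t in range(k, horizon):
        ucb = means + np.sqrt(2.0 * np.log(t + 1) / counts)
        arm = int(np.argmax(ucb))
        greedy = int(np.argmax(means))

        # Compensation: pay the empirical-mean gap so the player is
        # willing to follow the UCB recommendation instead of the greedy arm.
        pay = means[greedy] - means[arm] if arm != greedy else 0.0
        compensation += pay

        reward = rng.binomial(1, true_means[arm])
        # Reward drift (assumption): the reported reward is biased upward
        # in proportion to the compensation received.
        reported = reward + drift * pay

        counts[arm] += 1
        means[arm] += (reported - means[arm]) / counts[arm]
        regret += best - true_means[arm]

    return regret, compensation
```

Running this on a small instance (e.g. `incentivized_ucb([0.3, 0.5, 0.7], horizon=2000)`) shows both regret and compensation growing slowly with the horizon, consistent with the logarithmic bounds the abstract claims.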
