Weighted Tallying Bandits: Overcoming Intractability via Repeated Exposure Optimality

4 May 2023
Dhruv Malik
Conor Igoe
Yuanzhi Li
Aarti Singh
Abstract

In recommender system or crowdsourcing applications of online learning, a human's preferences or abilities are often a function of the algorithm's recent actions. Motivated by this, a significant line of work has formalized settings where an action's loss is a function of the number of times that action was played in the prior $m$ timesteps, where $m$ corresponds to a bound on human memory capacity. To more faithfully capture the decay of human memory with time, we introduce the Weighted Tallying Bandit (WTB), which generalizes this setting by requiring that an action's loss is a function of a \emph{weighted} summation of the number of times that arm was played in the last $m$ timesteps. The WTB setting is intractable without further assumptions, so we study it under Repeated Exposure Optimality (REO), a condition motivated by the literature on human physiology, which requires the existence of an action that, when repetitively played, will eventually yield smaller loss than any other sequence of actions. We study the minimization of the complete policy regret (CPR), which is the strongest notion of regret, in WTB under REO. Since $m$ is typically unknown, we assume we only have access to an upper bound $M$ on $m$. We show that for problems with $K$ actions and horizon $T$, a simple modification of the successive elimination algorithm has $O\left(\sqrt{KT} + (m+M)K\right)$ CPR. Interestingly, up to an additive (in lieu of multiplicative) factor of $(m+M)K$, this recovers the classical guarantee for the simpler stochastic multi-armed bandit with traditional regret. We additionally show that in our setting, any algorithm will suffer additive CPR of $\Omega\left(mK + M\right)$, demonstrating that our result is nearly optimal. Our algorithm is computationally efficient, and we experimentally demonstrate its practicality and superiority over natural baselines.
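
To make the repeated-exposure idea concrete, below is a minimal illustrative sketch (not the authors' exact algorithm) of a successive-elimination-style procedure adapted to WTB: each surviving arm is first played $M$ consecutive times so that its weighted tally of recent plays is saturated before a loss is recorded. The environment interface `loss_fn`, the burn-in schedule, and the particular confidence radius are assumptions made for illustration only.

```python
import numpy as np

def successive_elimination_wtb(loss_fn, K, T, M, delta=0.05):
    """Illustrative successive-elimination sketch for a weighted tallying bandit.

    loss_fn(arm) is a hypothetical environment call: it plays `arm`, updates the
    environment's hidden weighted tally of recent plays, and returns a stochastic
    loss. Under REO, each surviving arm is estimated only after a burn-in of M
    repeated pulls, so the recorded loss reflects repeated-exposure behaviour.
    """
    active = list(range(K))      # arms not yet eliminated
    means = np.zeros(K)          # running mean of recorded (post-burn-in) losses
    counts = np.zeros(K)         # number of recorded losses per arm
    t = 0
    while t < T:
        for arm in list(active):
            # Burn-in: M repeated pulls to saturate the weighted tally (losses discarded).
            for _ in range(M):
                if t >= T:
                    return active[int(np.argmin(means[active]))]
                loss_fn(arm)
                t += 1
            if t >= T:
                return active[int(np.argmin(means[active]))]
            # One recorded pull of the now-saturated arm.
            loss = loss_fn(arm)
            t += 1
            counts[arm] += 1
            means[arm] += (loss - means[arm]) / counts[arm]
        # Eliminate arms whose lower confidence bound exceeds the best upper bound.
        radius = np.sqrt(np.log(2 * K * max(T, 2) / delta) / (2 * np.maximum(counts, 1)))
        best_ucb = min(means[a] + radius[a] for a in active)
        active = [a for a in active if means[a] - radius[a] <= best_ucb]
    return active[int(np.argmin(means[active]))]
```

A caller would supply `loss_fn` wrapping a simulated or real WTB environment; the returned index is the arm the sketch identifies as the repeated-exposure-optimal one.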
