Adaptive Clustering and Personalization in Multi-Agent Stochastic Linear Bandits

15 June 2021
A. Ghosh, Abishek Sankararaman, K. Ramchandran
arXiv: 2106.08902
Abstract

We consider the problem of minimizing regret in an $N$-agent heterogeneous stochastic linear bandits framework, where the agents (users) are similar but not all identical. We model user heterogeneity using two ideas popularly used in practice: (i) a clustering framework, where users are partitioned into groups, with users in the same group being identical to each other but different across groups, and (ii) a personalization framework, where no two users are necessarily identical but each user's parameters are close to the population average. In the clustered-users setup, we propose a novel algorithm based on successive refinement of cluster identities and regret minimization. We show that, for any agent, the regret scales as $\mathcal{O}(\sqrt{T/N})$ if the agent is in a 'well-separated' cluster, or as $\mathcal{O}(T^{\frac{1}{2}+\varepsilon}/N^{\frac{1}{2}-\varepsilon})$ if its cluster is not well separated, where $\varepsilon$ is positive and arbitrarily close to $0$. Our algorithm is adaptive to the cluster separation and is parameter free: it does not need to know the number of clusters, the separation, or the cluster sizes, yet its regret guarantee adapts to the inherent complexity. In the personalization framework, we introduce a natural algorithm in which each personal bandit instance is initialized with an estimate of the global average model. We show that an agent $i$ whose parameter deviates from the population average by $\epsilon_i$ attains a regret of $\widetilde{O}(\epsilon_i\sqrt{T})$. This demonstrates that if the user representations are close (small $\epsilon_i$), the resulting regret is low, and vice versa. The results are empirically validated, and we observe that our adaptive algorithms outperform non-adaptive baselines.
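To make the personalization idea concrete, here is a minimal Python sketch, assuming a simple two-phase ridge-regression setup: a global average parameter is first estimated from pooled exploration data across agents, and each agent's personal estimate is then shrunk toward that global estimate rather than toward zero. All names (`pull`, `personalized_estimate`), noise levels, and phase lengths are illustrative assumptions, not the paper's actual algorithm or constants.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): personal bandit estimates
# are initialized/shrunk toward an estimate of the global average model.

rng = np.random.default_rng(0)
d, n_agents, lam = 5, 20, 1.0

# Ground truth: population average plus a small per-agent deviation epsilon_i.
theta_bar = rng.normal(size=d)
thetas = theta_bar + 0.1 * rng.normal(size=(n_agents, d))

def pull(theta, x):
    """Noisy linear reward for playing action x under parameter theta."""
    return float(x @ theta) + 0.1 * rng.normal()

# Phase 1: pool random-exploration data from all agents and estimate the
# global average parameter by ridge regression on the pooled samples.
A, b = lam * np.eye(d), np.zeros(d)
for i in range(n_agents):
    for _ in range(50):
        x = rng.normal(size=d)
        x /= np.linalg.norm(x)
        A += np.outer(x, x)
        b += pull(thetas[i], x) * x
theta_global = np.linalg.solve(A, b)

# Phase 2: each agent's personal ridge estimate is centered at theta_global,
# so an agent whose true parameter is close to the population average is
# effectively warm-started and needs little data of its own.
def personalized_estimate(X, y, theta0, lam=1.0):
    """Ridge regression shrunk toward theta0 instead of toward zero."""
    A_i = lam * np.eye(X.shape[1]) + X.T @ X
    return np.linalg.solve(A_i, lam * theta0 + X.T @ y)

# Example: agent 0 with only a handful of its own observations.
X0 = rng.normal(size=(10, d))
y0 = np.array([pull(thetas[0], x) for x in X0])
theta_hat0 = personalized_estimate(X0, y0, theta_global)
print("error vs. own parameter:", np.linalg.norm(theta_hat0 - thetas[0]))
```

The shrinkage target in Phase 2 is the design choice that mirrors the abstract's claim: the smaller the deviation $\epsilon_i$ from the population average, the better the warm start and the lower the resulting regret.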
