  3. 2103.02729
10
12

Linear Bandit Algorithms with Sublinear Time Complexity

3 March 2021
Shuo Yang
Tongzheng Ren
Sanjay Shakkottai
Eric Price
Inderjit S. Dhillon
Sujay Sanghavi
Abstract

We propose two linear bandit algorithms with per-step complexity sublinear in the number of arms K. The algorithms are designed for applications where the arm set is extremely large and slowly changing. Our key realization is that choosing an arm reduces to a maximum inner product search (MIPS) problem, which can be solved approximately without breaking regret guarantees. Existing approximate MIPS solvers run in sublinear time. We extend those solvers and present theoretical guarantees for online learning problems, where adaptivity (i.e., a later step depends on the feedback from previous steps) becomes a unique challenge. We then explicitly characterize the tradeoff between per-step complexity and regret. For sufficiently large K, our algorithms have sublinear per-step complexity and Õ(√T) regret. Empirically, we evaluate our proposed algorithms in a synthetic environment and a real-world online movie recommendation problem. Our proposed algorithms deliver a more than 72-fold speedup over linear-time baselines while retaining similar regret.
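The reduction described in the abstract can be made concrete: with a parameter estimate θ̂, greedy arm selection is argmax over arms of ⟨θ̂, x_a⟩, which is exactly a MIPS query. The sketch below is illustrative, not the paper's algorithm; the `approx_mips` function stands in for a real sublinear-time approximate MIPS solver (e.g., LSH-based), and all names and the subsampling shortcut are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

d, K = 8, 10_000
# A very large, slowly changing arm set of unit feature vectors.
arms = rng.normal(size=(K, d))
arms /= np.linalg.norm(arms, axis=1, keepdims=True)

# Current parameter estimate from past (adaptive) feedback.
theta_hat = rng.normal(size=d)

# Exact arm selection is a maximum inner product search (MIPS):
# argmax_a <theta_hat, x_a>, costing O(K d) per step.
exact_best = int(np.argmax(arms @ theta_hat))

def approx_mips(query, candidates, sample_frac=0.1, rng=rng):
    """Placeholder approximate MIPS oracle.

    A real sublinear-time solver (e.g., locality-sensitive hashing)
    returns an arm whose inner product is within a constant factor of
    the maximum. Here we merely subsample candidates to illustrate the
    interface and the speed/accuracy tradeoff.
    """
    idx = rng.choice(len(candidates),
                     size=int(sample_frac * len(candidates)),
                     replace=False)
    scores = candidates[idx] @ query
    return int(idx[np.argmax(scores)])

approx_best = approx_mips(theta_hat, arms)
```

Because the approximate oracle inspects only a fraction of the arms, its per-step cost is sublinear in K; the paper's contribution is showing that such approximation, used inside an adaptive bandit loop, still admits Õ(√T) regret.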
