arXiv:2007.01980
Linear Bandits with Limited Adaptivity and Learning Distributional Optimal Design

4 July 2020
Yufei Ruan
Jiaqi Yang
Yuanshuo Zhou
    OffRL
Abstract

Motivated by practical needs such as large-scale learning, we study the impact of adaptivity constraints on linear contextual bandits, a central problem in online active learning. We consider two popular limited-adaptivity models in the literature: batch learning and rare policy switches. We show that, when the context vectors are adversarially chosen in $d$-dimensional linear contextual bandits, the learner needs $O(d \log d \log T)$ policy switches to achieve the minimax-optimal regret, and this is optimal up to $\mathrm{poly}(\log d, \log\log T)$ factors; for stochastic context vectors, even in the more restricted batch learning model, only $O(\log\log T)$ batches are needed to achieve the optimal regret. Together with known results in the literature, our results present a complete picture of the adaptivity constraints in linear contextual bandits. Along the way, we propose the distributional optimal design, a natural extension of optimal experimental design, and provide a statistically and computationally efficient learning algorithm for the problem, which may be of independent interest.
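In the batched-bandit literature, an $O(\log\log T)$ batch count is typically achieved with a geometric grid of batch endpoints of the form $t_i \approx T^{1-2^{-i}}$, so that the endpoints interpolate from $\sqrt{T}$ up to $T$ in doubly-logarithmically many steps. A minimal sketch of such a grid follows; this is the standard construction from the batched-bandit literature, not necessarily the exact schedule used in this paper, and the function name and parameters are illustrative:

```python
import math

def batch_grid(T: int) -> list[int]:
    """Endpoints t_i = floor(T^(1 - 2^-i)) for i = 1, ..., M-1, then T.

    With M ~ log2(log2(T)) + 1 batches, the endpoints run through
    T^(1/2), T^(3/4), T^(7/8), ..., the standard way to cover a
    horizon of length T with O(log log T) batches.
    """
    M = max(2, math.ceil(math.log2(math.log2(T))) + 1)
    grid = [math.floor(T ** (1 - 2.0 ** (-i))) for i in range(1, M)]
    grid.append(T)  # the final batch always runs to the horizon
    return grid

# For T = 10^6 this yields 6 endpoints, the first being sqrt(T) = 1000.
print(batch_grid(10**6))
```

The doubling of the exponent gap at each step is what makes the batch count double-logarithmic: after $i$ batches the endpoint is within a $T^{2^{-i}}$ factor of $T$.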
