Randomized Exploration for Non-Stationary Stochastic Linear Bandits

11 December 2019
Baekjin Kim
Ambuj Tewari
Abstract

We investigate two perturbation approaches to overcome the conservatism that optimism-based algorithms chronically suffer from in practice. The first approach replaces optimism with a simple randomization when using confidence sets. The second adds random perturbations to the current estimate before maximizing the expected reward. For non-stationary linear bandits, where each action is associated with a $d$-dimensional feature and the unknown parameter is time-varying with total variation $B_T$, we propose two randomized algorithms via these two perturbation approaches: Discounted Randomized LinUCB (D-RandLinUCB) and Discounted Linear Thompson Sampling (D-LinTS). We highlight the statistical optimality versus computational efficiency trade-off between them: the former asymptotically achieves the optimal dynamic regret $\tilde{O}(d^{7/8} B_T^{1/4} T^{3/4})$, while the latter is oracle-efficient at the cost of an extra logarithmic factor in the number of arms relative to the minimax-optimal dynamic regret. In a simulation study, both algorithms show outstanding performance in tackling the conservatism issue that Discounted LinUCB struggles with.
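
To make the second perturbation approach concrete, here is a minimal sketch of one round of a discounted linear Thompson Sampling style update: maintain discounted ridge-regression statistics, draw a Gaussian-perturbed parameter around the current estimate, and play the arm that maximizes the sampled reward. The discount factor, regularizer, perturbation scale, and the covariance form (proportional to the inverse discounted Gram matrix) are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def d_lints_step(arms, V, b, v=1.0, rng=None):
    """One round of the sketch: perturb the estimate, then act greedily.

    arms : (K, d) feature vectors of the K available actions
    V    : (d, d) discounted Gram matrix  sum_s gamma^{t-s} x_s x_s^T + lam * I
    b    : (d,)   discounted response vector  sum_s gamma^{t-s} r_s x_s
    v    : perturbation scale (illustrative choice)
    """
    rng = np.random.default_rng() if rng is None else rng
    theta_hat = np.linalg.solve(V, b)            # discounted ridge estimate
    cov = v ** 2 * np.linalg.inv(V)              # assumed perturbation covariance
    theta_tilde = rng.multivariate_normal(theta_hat, cov)
    return int(np.argmax(arms @ theta_tilde))    # oracle-efficient argmax over arms

def d_lints_update(V, b, x, r, gamma=0.99, lam=1.0):
    """Discounted update after observing reward r for the played feature x."""
    I = np.eye(len(x))
    V_new = gamma * (V - lam * I) + np.outer(x, x) + lam * I  # keep regularizer undiscounted
    b_new = gamma * b + r * x
    return V_new, b_new

# Example round with K = 5 arms in d = 3 dimensions (synthetic features).
d, K = 3, 5
V, b = np.eye(d), np.zeros(d)                    # lam = 1.0 initialization
arms = np.random.default_rng(0).normal(size=(K, d))
a = d_lints_step(arms, V, b)
V, b = d_lints_update(V, b, arms[a], r=1.0)
```

Because the action is chosen by a single argmax over the perturbed linear reward, the step only needs an optimization oracle over the arm set, which is the computational advantage the abstract attributes to D-LinTS.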

View on arXiv