ResearchTrend.AI
arXiv:1712.00578
Tracking the Best Expert in Non-stationary Stochastic Environments

2 December 2017
Chen-Yu Wei
Yi-Te Hong
Chi-Jen Lu
Abstract

We study the dynamic regret of the multi-armed bandit and experts problems in non-stationary stochastic environments. We introduce a new parameter Λ, which measures the total statistical variance of the loss distributions over the T rounds of the process, and study how this quantity affects the regret. We investigate the interaction between Λ and Γ, which counts the number of times the distributions change, as well as between Λ and V, which measures how far the distributions deviate over time. One striking result we find is that even when Γ, V, and Λ are all restricted to constants, the regret lower bound in the bandit setting still grows with T. The other highlight is that in the full-information setting, constant regret becomes achievable with constant Γ and Λ, as the regret can be made independent of T, while with constant V and Λ the regret still has a T^{1/3} dependency. We not only propose algorithms with upper bound guarantees, but prove their matching lower bounds as well.
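The three non-stationarity measures from the abstract can be made concrete with a toy example. The sketch below is not the paper's algorithm; it only illustrates the quantities under assumed (standard) definitions for a sequence of Bernoulli loss distributions with means mu_t: Γ as the number of rounds where the distribution changes, V as the cumulative drift of the means, and Λ as the total statistical variance summed over rounds.

```python
# Toy illustration (assumed definitions, not taken verbatim from the paper)
# of the non-stationarity measures Gamma, V, and Lambda for a single-arm
# sequence of Bernoulli loss distributions with means mu_t over T rounds.

means = [0.2] * 5 + [0.8] * 10  # T = 15 rounds, one distribution change

# Gamma: number of times the loss distribution changes between rounds.
Gamma = sum(1 for t in range(1, len(means)) if means[t] != means[t - 1])

# V: total deviation (drift) of the means over time.
V = sum(abs(means[t] - means[t - 1]) for t in range(1, len(means)))

# Lambda: total statistical variance; for Bernoulli(p), Var = p(1 - p).
Lam = sum(m * (1 - m) for m in means)

print(Gamma, V, Lam)  # here: 1 change, drift 0.6, total variance 2.4
```

With many small drifts instead of one jump, V can stay small while Γ grows with the number of change points, which is the kind of interaction between the parameters the paper studies.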
