ResearchTrend.AI
Lower Bounds for Time-Varying Kernelized Bandits

22 October 2024
Xu Cai
Jonathan Scarlett
Abstract

The optimization of black-box functions with noisy observations is a fundamental problem with widespread applications, and has been widely studied under the assumption that the function lies in a reproducing kernel Hilbert space (RKHS). In the stationary setting, near-optimal regret bounds are known via developments in both upper and lower bounds. In this paper, we consider non-stationary scenarios, which are crucial for certain applications but are currently less well understood. Specifically, we provide the first algorithm-independent lower bounds for this setting, where the time variations are subject to a total variation budget according to some function norm. Under ℓ∞-norm variations, our bounds are found to be close to an existing upper bound (Hong et al., 2023). Under RKHS-norm variations, the upper and lower bounds are still reasonably close but with more of a gap, raising the interesting open question of whether non-minor improvements in the upper bound are possible.
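One common way to formalize the total variation budget mentioned above (a sketch based on standard non-stationary bandit formulations; the paper's exact definition, including the choice of norm and the scaling of the budget B_T with the horizon T, may differ) is:

```latex
% At round t, the learner queries a point x_t and observes a noisy value
%   y_t = f_t(x_t) + \epsilon_t,
% where each f_t lies in a bounded RKHS ball. Non-stationarity is constrained
% by a total variation budget under a chosen norm \|\cdot\| (e.g., the
% \ell_\infty norm or the RKHS norm):
\sum_{t=1}^{T-1} \| f_{t+1} - f_t \| \le B_T
```

Under this formalization, the choice of norm matters: an ℓ∞ budget constrains pointwise drift of the function values, whereas an RKHS-norm budget constrains drift in a typically stronger sense, which is consistent with the differing tightness of the bounds reported in the abstract.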

@article{cai2025_2410.16692,
  title={Lower Bounds for Time-Varying Kernelized Bandits},
  author={Xu Cai and Jonathan Scarlett},
  journal={arXiv preprint arXiv:2410.16692},
  year={2025}
}