Lower Bounds on Regret for Noisy Gaussian Process Bandit Optimization

31 May 2017
Jonathan Scarlett
Ilija Bogunovic
Volkan Cevher
Abstract

In this paper, we consider the problem of sequentially optimizing a black-box function $f$ based on noisy samples and bandit feedback. We assume that $f$ is smooth in the sense of having a bounded norm in some reproducing kernel Hilbert space (RKHS), yielding a commonly-considered non-Bayesian form of Gaussian process bandit optimization. We provide algorithm-independent lower bounds on the simple regret, measuring the suboptimality of a single point reported after $T$ rounds, and on the cumulative regret, measuring the sum of regrets over the $T$ chosen points. For the isotropic squared-exponential kernel in $d$ dimensions, we find that an average simple regret of $\epsilon$ requires $T = \Omega\big(\frac{1}{\epsilon^2}(\log\frac{1}{\epsilon})^{d/2}\big)$, and the average cumulative regret is at least $\Omega\big(\sqrt{T(\log T)^{d/2}}\big)$, thus matching existing upper bounds up to the replacement of $d/2$ by $2d + O(1)$ in both cases. For the Matérn-$\nu$ kernel, we give analogous bounds of the form $\Omega\big((\frac{1}{\epsilon})^{2+d/\nu}\big)$ and $\Omega\big(T^{\frac{\nu+d}{2\nu+d}}\big)$, and discuss the resulting gaps to the existing upper bounds.
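
Concretely, with $x^\star = \arg\max_x f(x)$, the simple regret of a reported point $x^{(T)}$ is $f(x^\star) - f(x^{(T)})$, and the cumulative regret over the queried points $x_1, \dots, x_T$ is $R_T = \sum_{t=1}^T \big(f(x^\star) - f(x_t)\big)$. As a minimal sketch of the setting these lower bounds apply to (not the paper's construction, which is algorithm-independent), the following Python snippet runs a generic GP-UCB-style rule on a one-dimensional function with bounded RKHS norm under the squared-exponential kernel and tracks both regret notions; the kernel lengthscale, noise level, and UCB coefficient beta are assumed values chosen for illustration.

import numpy as np

def se_kernel(x, y, lengthscale=0.2):
    # Isotropic squared-exponential kernel matrix k(x_i, y_j).
    return np.exp(-0.5 * ((x[:, None] - y[None, :]) / lengthscale) ** 2)

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 200)   # finite candidate set in [0, 1]
K = se_kernel(grid, grid)

# Target with bounded RKHS norm: a finite kernel expansion
# f(.) = sum_i alpha_i k(., c_i), so ||f||_H^2 = alpha^T K_cc alpha < inf.
centers = rng.choice(len(grid), size=5, replace=False)
alpha = rng.normal(size=5)
f = se_kernel(grid, grid[centers]) @ alpha
f_max = f.max()

noise_std, T, beta = 0.1, 100, 2.0  # beta: assumed UCB trade-off constant
chosen, y_obs, cum_regret = [], [], 0.0

for t in range(T):
    if not chosen:
        i = int(rng.integers(len(grid)))              # first query at random
    else:
        idx = np.array(chosen)
        K_tt = K[np.ix_(idx, idx)] + noise_std**2 * np.eye(len(idx))
        K_st = K[:, idx]
        mu = K_st @ np.linalg.solve(K_tt, np.array(y_obs))   # posterior mean
        v = np.linalg.solve(K_tt, K_st.T)
        var = np.clip(np.diag(K) - np.sum(K_st * v.T, axis=1), 0.0, None)
        i = int(np.argmax(mu + beta * np.sqrt(var)))  # UCB acquisition rule
    chosen.append(i)
    y_obs.append(f[i] + noise_std * rng.normal())     # noisy bandit feedback
    cum_regret += f_max - f[i]                        # instantaneous regret

# Report the point maximizing the final posterior mean; its gap to the
# optimum is the simple regret after T rounds.
idx = np.array(chosen)
K_tt = K[np.ix_(idx, idx)] + noise_std**2 * np.eye(len(idx))
mu = K[:, idx] @ np.linalg.solve(K_tt, np.array(y_obs))
simple_regret = f_max - f[int(np.argmax(mu))]
print(f"cumulative regret R_T = {cum_regret:.3f}")
print(f"simple regret after T = {T}: {simple_regret:.3f}")

Running the sketch, the per-round regret shrinks as the posterior concentrates around the maximizer; the paper's results state that no algorithm, UCB-style or otherwise, can make either regret notion shrink faster than the stated $\Omega(\cdot)$ rates.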
