Tight Regret Bounds for Bayesian Optimization in One Dimension

Abstract

We consider the problem of Bayesian optimization (BO) in one dimension, under a Gaussian process prior and Gaussian sampling noise. We provide a theoretical analysis showing that, under fairly mild technical assumptions on the kernel, the best possible cumulative regret up to time $T$ behaves as $\Omega(\sqrt{T})$ and $O(\sqrt{T\log T})$. This gives a tight characterization up to a $\sqrt{\log T}$ factor, and includes the first non-trivial lower bound for noisy BO. Our assumptions are satisfied, for example, by the squared exponential and Matérn-$\nu$ kernels, with the latter requiring $\nu > 2$. Our results certify the near-optimality of existing bounds (Srinivas {\em et al.}, 2009) for the SE kernel, while proving them to be strictly suboptimal for the Matérn kernel with $\nu > 2$.
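The setting described above can be illustrated with a minimal GP-UCB loop in one dimension: a Gaussian process posterior over a toy objective is updated after each noisy observation, and the next query maximizes an upper confidence bound. This is a sketch only; the objective `f`, the lengthscale, the noise level, and the exploration weight `beta` are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def se_kernel(a, b, ls=0.2):
    # Squared exponential kernel: k(x, x') = exp(-(x - x')^2 / (2 ls^2)).
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ls ** 2))

def gp_posterior(X, y, Xs, noise=0.1):
    # Standard GP regression posterior mean/std at test points Xs,
    # via a Cholesky factorization of the noisy kernel matrix.
    K = se_kernel(X, X) + noise ** 2 * np.eye(len(X))
    Ks = se_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)  # k(x,x) = 1 for this kernel
    return mu, np.sqrt(np.maximum(var, 1e-12))

f = lambda x: np.sin(3 * x) + 0.5 * x   # toy objective (assumed)
grid = np.linspace(0.0, 2.0, 200)
noise = 0.1

# Start from one random noisy observation, then iterate GP-UCB.
X = rng.uniform(0.0, 2.0, size=1)
y = f(X) + noise * rng.standard_normal(1)
for t in range(2, 30):
    mu, sd = gp_posterior(X, y, grid, noise)
    beta = 2 * np.log(len(grid) * t ** 2)   # illustrative UCB schedule
    x_next = grid[np.argmax(mu + np.sqrt(beta) * sd)]
    X = np.append(X, x_next)
    y = np.append(y, f(x_next) + noise * rng.standard_normal())
```

After the loop, the queries in `X` tend to concentrate around the maximizer of `f`, which is the behavior whose cumulative-regret cost the paper's bounds quantify.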

@article{scarlett2025_1805.11792,
  title={Tight Regret Bounds for Bayesian Optimization in One Dimension},
  author={Jonathan Scarlett},
  journal={arXiv preprint arXiv:1805.11792},
  year={2025}
}