Gradient-based Sample Selection for Faster Bayesian Optimization

Bayesian optimization (BO) is an effective technique for black-box optimization. However, its applicability is typically limited to moderate-budget problems due to the cubic complexity of computing the Gaussian process (GP) surrogate model. In large-budget scenarios, directly employing the standard GP model incurs significant computational time and resource requirements. In this paper, we propose a novel approach, Gradient-based Sample Selection Bayesian Optimization (GSSBO), to enhance the computational efficiency of BO. The GP model is constructed on a selected subset of samples rather than the whole dataset; these samples are chosen by leveraging gradient information to maintain diversity and representativeness. We provide a theoretical analysis of the gradient-based sample selection strategy and obtain explicit sublinear regret bounds for the proposed framework. Extensive experiments on synthetic and real-world tasks demonstrate that our approach significantly reduces the computational cost of GP fitting in BO while maintaining optimization performance comparable to baseline methods.
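The abstract does not specify the selection rule in detail, so the following is only an illustrative toy sketch of the general idea: score each observed sample by a crude finite-difference gradient estimate, then greedily pick a subset that balances high gradient magnitude against diversity (distance to the already-selected points). The function name `select_samples`, the nearest-neighbour gradient estimate, and the `alpha` trade-off weight are all assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

def select_samples(X, y, k, alpha=0.5):
    """Toy gradient-based sample selection (illustrative only, not GSSBO).

    Scores each sample by a nearest-neighbour finite-difference gradient
    estimate, then greedily selects k points trading off gradient score
    (representativeness) against distance to the selected set (diversity).
    """
    n = len(X)
    # Pairwise Euclidean distances; diagonal set to inf to ignore self-pairs.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)

    # Crude |dy/dx| estimate toward each point's nearest neighbour.
    nn = D.argmin(axis=1)
    grad = np.abs(y - y[nn]) / D[np.arange(n), nn]
    grad = grad / (grad.max() + 1e-12)  # normalise scores to [0, 1]

    selected = [int(np.argmax(grad))]  # start from the steepest point
    while len(selected) < k:
        # Distance from every point to the current selected set.
        dmin = D[:, selected].min(axis=1)
        dmin[selected] = 0.0  # already-selected points contribute no diversity
        score = alpha * grad + (1 - alpha) * dmin / (dmin.max() + 1e-12)
        score[selected] = -np.inf  # never re-select a point
        selected.append(int(np.argmax(score)))
    return np.array(selected)

# Small demo on a synthetic 2-D dataset.
rng = np.random.default_rng(0)
X = rng.uniform(size=(30, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
idx = select_samples(X, y, k=5)
print("selected indices:", idx)
```

In a BO loop, the GP surrogate would then be fit only on `X[idx], y[idx]`, reducing the cubic GP cost from O(n³) to O(k³) per iteration, which is the efficiency gain the abstract describes.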
@article{wei2025_2504.07742,
  title={Gradient-based Sample Selection for Faster Bayesian Optimization},
  author={Qiyu Wei and Haowei Wang and Zirui Cao and Songhao Wang and Richard Allmendinger and Mauricio A Álvarez},
  journal={arXiv preprint arXiv:2504.07742},
  year={2025}
}