Bayesian Optimization from Human Feedback: Near-Optimal Regret Bounds

Bayesian optimization (BO) with preference-based feedback has recently garnered significant attention due to its emerging applications. We refer to this problem as Bayesian Optimization from Human Feedback (BOHF), which differs from conventional BO by learning the best actions from a reduced feedback model, where only the preference between two actions is revealed to the learner at each time step. The objective is to identify the best action using a limited number of preference queries, typically obtained through costly human feedback. Existing work, which adopts the Bradley-Terry-Luce (BTL) feedback model, provides regret bounds for the performance of several algorithms. In this work, within the same framework, we develop tighter performance guarantees. Specifically, we derive regret bounds of $\tilde{\mathcal{O}}(\sqrt{\Gamma(T)T})$, where $\Gamma(T)$ represents the maximum information gain (a kernel-specific complexity term) and $T$ is the number of queries. Our results significantly improve upon existing bounds. Notably, for common kernels, we show that the order-optimal sample complexities of conventional BO, achieved with richer feedback models, are recovered. In other words, the same number of preferential samples as scalar-valued samples is sufficient to find a nearly optimal solution.
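
As a rough illustration of the feedback model described above (not code from the paper), the sketch below simulates a single round of BTL preference feedback: given a hypothetical latent utility function `f`, a query on a pair of actions $(x, x')$ returns a binary preference drawn with probability equal to the logistic sigmoid of the utility gap $f(x) - f(x')$. The utility function and query strategy here are placeholder assumptions for demonstration only.

```python
import numpy as np

def btl_preference(f, x, x_prime, rng):
    """Simulate one Bradley-Terry-Luce (BTL) preference query.

    Returns 1 if x is preferred over x_prime, 0 otherwise. The preference
    probability is the logistic sigmoid of the latent utility gap.
    Illustrative sketch only; `f` is an assumed latent utility function.
    """
    p = 1.0 / (1.0 + np.exp(-(f(x) - f(x_prime))))  # P(x preferred over x')
    return int(rng.random() < p)

# Toy example: T binary preference queries on a 1-D latent utility.
f = lambda x: -(x - 0.3) ** 2      # hypothetical latent utility (unknown to the learner)
T = 5                              # number of (costly) preference queries
rng = np.random.default_rng(0)
for t in range(T):
    x, x_prime = rng.random(2)     # two candidate actions in [0, 1]
    y = btl_preference(f, x, x_prime, rng)
    label = "x" if y else "x'"
    print(f"query {t}: x={x:.2f} vs x'={x_prime:.2f} -> preferred: {label}")
```

In the BOHF setting, only these binary comparisons are observed (never the scalar values of $f$); the learner must choose which pairs to query so that regret over $T$ such queries stays small.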
@article{kayal2025_2505.23673,
  title   = {Bayesian Optimization from Human Feedback: Near-Optimal Regret Bounds},
  author  = {Aya Kayal and Sattar Vakili and Laura Toni and Da-shan Shiu and Alberto Bernacchia},
  journal = {arXiv preprint arXiv:2505.23673},
  year    = {2025}
}