ResearchTrend.AI
πBO: Augmenting Acquisition Functions with User Beliefs for Bayesian Optimization

23 April 2022
Carl Hvarfner
Daniel Stoll
Artur L. F. Souza
Marius Lindauer
Frank Hutter
Luigi Nardi
Abstract

Bayesian optimization (BO) has become an established framework and popular tool for hyperparameter optimization (HPO) of machine learning (ML) algorithms. While known for its sample-efficiency, vanilla BO cannot utilize readily available prior beliefs the practitioner has on the potential location of the optimum. Thus, BO disregards a valuable source of information, reducing its appeal to ML practitioners. To address this issue, we propose πBO, an acquisition function generalization which incorporates prior beliefs about the location of the optimum in the form of a probability distribution, provided by the user. In contrast to previous approaches, πBO is conceptually simple and can easily be integrated with existing libraries and many acquisition functions. We provide regret bounds when πBO is applied to the common Expected Improvement acquisition function and prove convergence at regular rates independently of the prior. Further, our experiments show that πBO outperforms competing approaches across a wide suite of benchmarks and prior characteristics. We also demonstrate that πBO improves on the state-of-the-art performance for a popular deep learning task, with a 12.5× time-to-accuracy speedup over prominent BO approaches.
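The core idea described in the abstract — multiplying an existing acquisition function by the user's prior over the optimum, with the prior's influence decaying as observations accumulate — can be sketched in a few lines. The following is a minimal illustration only, not the authors' implementation: it assumes a single query point, a GP posterior summarized by a mean and standard deviation, and a decay exponent of the form beta / n; the function names and the specific constants are hypothetical.

```python
import math

def norm_pdf(z):
    # Standard normal density.
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, sigma, f_best):
    """Standard Expected Improvement (minimization) at one point,
    given the GP posterior mean mu and standard deviation sigma."""
    sigma = max(sigma, 1e-12)  # guard against a degenerate posterior
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm_cdf(z) + sigma * norm_pdf(z)

def pibo_acquisition(mu, sigma, f_best, prior_pdf, x, n, beta=1.0):
    """piBO-style acquisition (sketch): the base acquisition is weighted
    by the user's prior over the optimum location, raised to beta / n,
    so the prior's influence decays as the iteration count n grows and
    the surrogate's data increasingly dominates."""
    return expected_improvement(mu, sigma, f_best) * prior_pdf(x) ** (beta / n)
```

For example, with a Gaussian user prior centered where the practitioner believes the optimum lies, candidates near that center get their EI boosted early on; as `n` grows, `prior_pdf(x) ** (beta / n)` tends to 1 and the weighted acquisition converges back to plain EI, which is the intuition behind the prior-independent convergence rates claimed in the abstract.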
