ResearchTrend.AI


Diffusion-BBO: Diffusion-Based Inverse Modeling for Online Black-Box Optimization

30 June 2024
Dongxia Wu
Nikki Lijing Kuang
Ruijia Niu
Yi-An Ma
Rose Yu
Abstract

Online black-box optimization (BBO) aims to optimize an objective function by iteratively querying a black-box oracle in a sample-efficient way. Prior studies focus on forward approaches, such as Gaussian processes (GPs), that learn a surrogate model of the unknown objective function, but these struggle to avoid out-of-distribution and invalid designs in scientific discovery tasks. Recently, inverse modeling approaches, which map the objective space to the design space with conditional diffusion models, have demonstrated an impressive ability to learn the data manifold. However, they operate offline on pre-collected data; how to design inverse approaches for online BBO that actively query new data and improve sample efficiency remains an open question. In this work, we propose Diffusion-BBO, a sample-efficient online BBO framework that uses a conditional diffusion model as the inverse surrogate model. Diffusion-BBO employs a novel acquisition function, Uncertainty-aware Exploration (UaE), to propose scores in the objective space for conditional sampling. We theoretically prove that Diffusion-BBO with UaE achieves a near-optimal solution for online BBO, and we empirically demonstrate that it outperforms existing online BBO baselines across six scientific discovery tasks.
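To make the inverse-modeling loop concrete, here is a minimal toy sketch of the kind of online procedure the abstract describes: an inverse surrogate proposes designs conditioned on a target score, an uncertainty-aware acquisition picks that target, and the oracle is queried on the proposals. All names and the surrogate itself are hypothetical stand-ins (the paper's actual surrogate is a conditional diffusion model, and the real UaE acquisition is defined in the paper, not here); this only illustrates the control flow.

```python
import numpy as np

rng = np.random.default_rng(0)

def oracle(x):
    """Toy black-box objective (stand-in for the expensive oracle)."""
    return -float(np.sum((x - 0.5) ** 2))

class ToyInverseSurrogate:
    """Placeholder for a conditional model p(x | y).

    It jitters previously observed designs whose scores are closest to
    the conditioning score; a real Diffusion-BBO surrogate would be a
    conditional diffusion model trained on the (x, y) pairs.
    """
    def __init__(self, dim):
        self.dim = dim
        self.X, self.y = [], []

    def update(self, x, y):
        self.X.append(x)
        self.y.append(y)

    def sample(self, y_target, n=4):
        # Condition on y_target by perturbing the designs whose
        # observed scores are nearest to it.
        X, ys = np.array(self.X), np.array(self.y)
        idx = np.argsort(np.abs(ys - y_target))[:4]
        base = X[idx][rng.integers(0, len(idx), size=n)]
        return base + 0.05 * rng.standard_normal((n, self.dim))

    def uncertainty(self, y_target):
        # Crude epistemic proxy: distance from y_target to the
        # closest observed score.
        return float(np.min(np.abs(np.array(self.y) - y_target)))

def acquisition(y_target, surrogate, lam=1.0):
    """Hypothetical uncertainty-aware trade-off: prefer ambitious
    target scores that the surrogate can still model reliably."""
    return y_target - lam * surrogate.uncertainty(y_target)

dim, budget = 4, 30
surrogate = ToyInverseSurrogate(dim)
# Warm start with a few random designs.
for _ in range(5):
    x = rng.random(dim)
    surrogate.update(x, oracle(x))

for t in range(budget):
    y_best = max(surrogate.y)
    # Choose the conditioning score by maximizing the acquisition
    # over candidate targets at or above the incumbent.
    candidates = y_best + np.linspace(0.0, 0.2, 10)
    y_cond = max(candidates, key=lambda y: acquisition(y, surrogate))
    # Inverse step: sample designs conditioned on y_cond, query the oracle.
    for x in surrogate.sample(y_cond, n=4):
        surrogate.update(x, oracle(x))

print("best observed score:", max(surrogate.y))
```

The key structural point is that the surrogate is queried in the inverse direction (score to design) rather than the forward direction (design to score), so exploration is driven by choosing which score to condition on.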

@article{wu2025_2407.00610,
  title={Diffusion-BBO: Diffusion-Based Inverse Modeling for Online Black-Box Optimization},
  author={Dongxia Wu and Nikki Lijing Kuang and Ruijia Niu and Yi-An Ma and Rose Yu},
  journal={arXiv preprint arXiv:2407.00610},
  year={2025}
}