Bayesian Optimization for Non-Convex Two-Stage Stochastic Optimization Problems

Bayesian optimization is a sample-efficient method for solving expensive, black-box optimization problems. Stochastic programming concerns optimization under uncertainty where, typically, average performance is the quantity of interest. In the first stage of a two-stage problem, here-and-now decisions must be made in the face of uncertainty, while in the second stage, wait-and-see decisions are made after the uncertainty has been resolved. Many methods in stochastic programming assume that the objective is cheap to evaluate and linear or convex. We apply Bayesian optimization to solve non-convex, two-stage stochastic programs whose objectives are black-box and expensive to evaluate, as is often the case with simulation objectives. We formulate a knowledge-gradient-based acquisition function to jointly optimize the first- and second-stage variables, establish a guarantee of asymptotic consistency, and provide a computationally efficient approximation. Empirically, our method matches an alternative formulation with fewer approximations that alternates its focus between the two variable types, and outperforms both the state of the art and the standard, naïve two-step benchmark.
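To make the two-stage structure concrete, the following sketch sets up a toy problem of this form: a first-stage decision x is fixed before the uncertainty w is observed, and for each scenario the best second-stage (recourse) decision y is chosen afterwards; the quantity minimized over x is the expected total cost. The cost functions, scenario distribution, and grid search are illustrative assumptions, not the paper's method; they stand in for the expensive simulation objective and the acquisition-driven search described in the abstract.

```python
import random
import statistics

random.seed(0)

# Hypothetical toy costs (illustrative only, not from the paper).
def first_stage_cost(x):
    return 0.05 * x ** 2

def second_stage_cost(x, y, w):
    # Cost incurred after the uncertain outcome w is revealed.
    return (y - w) ** 2 + 0.1 * (y - x) ** 2

def recourse(x, w, y_grid):
    # Wait-and-see: pick the best second-stage decision for this scenario.
    return min(second_stage_cost(x, y, w) for y in y_grid)

def expected_total_cost(x, scenarios, y_grid):
    # Here-and-now: average the optimal recourse cost over sampled scenarios.
    return first_stage_cost(x) + statistics.mean(
        recourse(x, w, y_grid) for w in scenarios
    )

# Monte Carlo scenarios for the uncertainty w, and a shared decision grid.
scenarios = [random.gauss(1.0, 0.5) for _ in range(200)]
grid = [i / 50 for i in range(-100, 151)]  # decisions in [-2, 3]

# Naive two-step baseline: exhaustively minimize the expected cost over x.
best_x = min(grid, key=lambda x: expected_total_cost(x, scenarios, grid))
```

The nested structure (an inner minimization per scenario inside an outer expectation) is what makes such objectives expensive, motivating sample-efficient approaches such as the knowledge-gradient acquisition the paper proposes.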
@article{buckingham2025_2408.17387,
  title={Bayesian Optimization for Non-Convex Two-Stage Stochastic Optimization Problems},
  author={Jack M. Buckingham and Ivo Couckuyt and Juergen Branke},
  journal={arXiv preprint arXiv:2408.17387},
  year={2025}
}