Stochastic Control Methods for Optimization

Jinniao Qiu
Main text: 31 pages, 7 figures; bibliography: 3 pages
Abstract

In this work, we investigate a stochastic control framework for global optimization over both Euclidean spaces and the Wasserstein space of probability measures, where the objective function may be non-convex and/or non-differentiable. In the Euclidean setting, the original minimization problem is approximated by a family of regularized stochastic control problems; using dynamic programming, we analyze the associated Hamilton-Jacobi-Bellman equations and obtain tractable representations via the Cole-Hopf transformation and the Feynman-Kac formula. For optimization over probability measures, we formulate a regularized mean-field control problem characterized by a master equation and further approximate it by controlled N-particle systems. We establish that, as the regularization parameter tends to zero (and, for optimization over probability measures, as the particle number tends to infinity), the value of the control problem converges to the global minimum of the original objective. Building on the resulting probabilistic representations, we propose Monte Carlo-based numerical schemes that are derivative-free thanks to the Bismut-Elworthy-Li formula. Numerical experiments illustrate the effectiveness of the methods and support the theoretical convergence rates.
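The convergence mechanism behind the Euclidean setting can be illustrated with a minimal Monte Carlo sketch, not the paper's actual scheme: a Cole-Hopf-type regularized value v_eps(x0) = -eps * log E[exp(-f(X)/eps)], estimated by sampling a diffused state X, tends to the global minimum of f as eps -> 0 by the Laplace principle. The test objective `f`, the Gaussian exploration law, and all parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Illustrative non-convex objective: a tilted double well
    # whose global minimum sits near x ~ -1.04.
    return (x**2 - 1.0) ** 2 + 0.3 * x

def regularized_value(x0, eps, T=4.0, n=200_000):
    """Monte Carlo estimate of the regularized (soft-min) value
       v_eps(x0) = -eps * log E[ exp(-f(x0 + sqrt(T) W) / eps) ],
    with W ~ N(0, 1). As eps -> 0 this converges to the minimum of f
    over the region explored by the diffused samples."""
    samples = x0 + np.sqrt(T) * rng.standard_normal(n)
    w = -f(samples) / eps
    m = w.max()  # log-sum-exp shift for numerical stability at small eps
    return -eps * (m + np.log(np.mean(np.exp(w - m))))

if __name__ == "__main__":
    # The estimate decreases toward the global minimum as eps shrinks.
    for eps in (0.5, 0.2, 0.05, 0.02):
        print(f"eps = {eps:5.2f}  v_eps(0) = {regularized_value(0.0, eps):+.4f}")
```

The log-sum-exp shift is essential in practice: for small eps the raw weights exp(-f/eps) underflow, so subtracting the maximum exponent keeps the estimator finite without changing its value.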
