Optimal Importance Sampling via Stochastic Optimal Control for
Stochastic Reaction Networks
We explore the efficient estimation of statistical quantities, particularly rare event probabilities, for stochastic reaction networks. To this end, we propose a novel importance sampling (IS) approach to improve the efficiency of Monte Carlo (MC) estimators based on an approximate tau-leap scheme. The crucial step in IS is choosing an appropriate change of measure to achieve substantial variance reduction. Based on an original connection between finding the optimal IS parameters within a class of probability measures and a stochastic optimal control (SOC) formulation, we propose an automated approach to obtain an efficient path-dependent measure change. The optimal IS parameters are obtained by solving a variance minimization problem. We derive an associated backward equation solved by these optimal parameters. Given the challenge of analytically solving this backward equation, we propose a numerical dynamic programming algorithm to approximate the optimal control parameters. In the one-dimensional case, our numerical results show that the variance of our proposed estimator decays at a rate of O(Δt) for a step size of Δt, compared to O(1) for a standard MC estimator. For a given prescribed error tolerance, TOL, this implies an improvement in the computational complexity to become O(TOL⁻²) instead of O(TOL⁻³) when using a standard MC estimator. To mitigate the curse of dimensionality caused by solving the backward equation in the multi-dimensional case, we propose an alternative learning-based method that approximates the value function using a neural network, the parameters of which are determined via a stochastic optimization algorithm. Our numerical experiments demonstrate that our learning-based IS approach substantially reduces the variance of the MC estimator.
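To make the setting concrete, the following is a minimal toy sketch (not the paper's method) of a tau-leap MC estimator with an importance-sampling change of measure for a single decay reaction X → ∅ with propensity c·x. Here the Poisson firing rates are simply inflated by a constant factor `delta`, a crude stand-in for the path-dependent control parameters the abstract describes, and the likelihood ratio corrects the resulting bias; all names and parameter values are illustrative assumptions.

```python
import numpy as np

def tau_leap_is(x0=10, c=1.0, T=1.0, dt=0.05, delta=2.0,
                n_paths=100_000, seed=0):
    """Tau-leap MC estimate of the rare event probability P(X(T) == 0)
    for the decay reaction X -> 0 with propensity c*x.

    IS sketch: the per-step Poisson means are tilted by the constant
    factor `delta` (delta=1 recovers plain MC), and each path carries a
    log likelihood ratio that re-weights the estimator under the
    original measure. The clipping k = min(k, x) keeps the state
    non-negative; it slightly perturbs the likelihood ratio and is
    acceptable only for this illustration.
    """
    rng = np.random.default_rng(seed)
    n_steps = int(round(T / dt))
    x = np.full(n_paths, x0, dtype=np.int64)
    log_lr = np.zeros(n_paths)  # log of dP/dQ along each path
    for _ in range(n_steps):
        lam = c * x * dt            # original Poisson mean per step
        lam_tilt = delta * lam      # tilted mean under the IS measure
        k = np.minimum(rng.poisson(lam_tilt), x)
        # log [ Poisson(k; lam) / Poisson(k; lam_tilt) ]
        active = lam > 0
        log_lr[active] += (k[active] * np.log(1.0 / delta)
                           + (lam_tilt[active] - lam[active]))
        x -= k
    weights = np.exp(log_lr) * (x == 0)   # indicator times likelihood ratio
    est = weights.mean()
    se = weights.std(ddof=1) / np.sqrt(n_paths)
    return est, se

if __name__ == "__main__":
    est_mc, se_mc = tau_leap_is(delta=1.0)   # plain tau-leap MC
    est_is, se_is = tau_leap_is(delta=2.0)   # tilted towards extinction
    print(f"MC : {est_mc:.5f} +/- {se_mc:.5f}")
    print(f"IS : {est_is:.5f} +/- {se_is:.5f}")
```

A constant tilt like this already pushes more paths into the rare event; the paper's contribution is choosing the tilt optimally, per time and state, via the SOC formulation rather than by hand.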