A-optimal design of experiments for infinite-dimensional Bayesian linear
inverse problems with regularized ℓ₀-sparsification
We present a scalable method for computing A-optimal designs for infinite-dimensional Bayesian linear inverse problems governed by time-dependent partial differential equations (PDEs). Our target application is the optimal placement of sensors at which observational data are collected. Computing optimal designs is particularly challenging for inverse problems governed by computationally expensive PDE models with infinite-dimensional (or, after discretization, high-dimensional) parameters. To alleviate the computational cost, we exploit the problem structure and build a low-rank approximation of the parameter-to-observable map, preconditioned with the square root of the prior covariance operator. The availability of this low-rank surrogate spares our method expensive PDE solves when evaluating the optimal design objective function and its derivatives. We control the sparsity of the design by employing a sequence of penalty functions that successively approximate the ℓ₀-"norm"; this results in binary 0-1 designs that characterize optimal sensor locations. We present numerical results for the inference of the initial condition from spatial and temporal observations in a time-dependent advection-diffusion problem in two and three space dimensions. We find that an optimal design can be computed at a cost, measured in forward PDE solves, that is independent of the parameter dimension, and depends only weakly on the discretization of the sensor domain. Moreover, the numerical optimization problem for finding the optimal design can be solved in a number of quasi-Newton interior-point iterations that is insensitive to the parameter dimension and the size of the design vector. In a numerical example we demonstrate that ℓ₀-sparsified designs obtained via a continuation method outperform ℓ₁-sparsified designs.
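To make the two key ingredients concrete, the following is a minimal, small-scale sketch in NumPy: evaluating the A-optimal objective (trace of the posterior covariance) through a truncated SVD of the prior-preconditioned parameter-to-observable map, and a smooth surrogate of the ℓ₀-"norm" parameterized by a continuation parameter. The random matrix `Ftilde`, the truncation rank, and the particular penalty family `w/(w+eps)` are illustrative assumptions, not the paper's construction; in the PDE setting the low-rank factor would instead be built from a modest, dimension-independent number of forward and adjoint solves.

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 200, 30  # parameter dimension, number of candidate sensors

# Stand-in for the prior-preconditioned parameter-to-observable map
# Ftilde = Gamma_noise^{-1/2} F C_prior^{1/2} (random here for illustration).
Ftilde = rng.standard_normal((q, n)) / np.sqrt(n)

def tr_posterior(w, rank=25):
    """A-optimal objective in whitened coordinates:
    tr((I + Ftilde^T diag(w) Ftilde)^{-1}), evaluated via a truncated SVD
    of diag(sqrt(w)) @ Ftilde and the identity
    tr(...) = n - sum_i s_i^2 / (1 + s_i^2)."""
    Fw = np.sqrt(np.maximum(w, 0.0))[:, None] * Ftilde
    s = np.linalg.svd(Fw, compute_uv=False)[:rank]
    return n - np.sum(s**2 / (1.0 + s**2))

def l0_surrogate(w, eps):
    """One hypothetical family of smooth l0 approximations:
    w/(w+eps) -> 1_{w>0} pointwise as eps -> 0 (the paper's exact
    penalty sequence may differ)."""
    return np.sum(w / (w + eps))

def penalized_objective(w, gamma, eps):
    """Design objective plus sparsity penalty; in a continuation method,
    this would be minimized repeatedly over w in [0,1]^q while eps is
    driven toward zero, pushing the weights toward binary 0-1 values."""
    return tr_posterior(w) + gamma * l0_surrogate(w, eps)
```

Activating sensors can only decrease the posterior trace (`tr_posterior(ones) < tr_posterior(zeros) = n`), so the penalty term is what trades uncertainty reduction against the number of active sensors.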