Posterior Consistency for Bayesian (elliptic) Inverse Problems through
Stability and Regression Results
We investigate the Bayesian approach to nonlinear inverse problems by analysing it from a frequentist perspective. We show that the posterior concentrates around the truth as the noise level tends to zero or the amount of data grows to infinity. We consider the inverse problem of reconstructing the diffusion coefficient from noisy observations of the solution to an elliptic PDE in divergence form. This example illustrates a link between stability results for deterministic nonlinear inverse problems and posterior consistency for Bayesian nonparametric regression. We obtain posterior consistency under weak assumptions on the prior, and an algebraic rate of convergence provided appropriate asymptotic lower bounds hold for the small ball probabilities of the prior; we establish such bounds for a popular class of priors for the elliptic inverse problem. To this end we prove posterior consistency for Bayesian nonparametric regression under weak assumptions on the prior and Gaussian observational noise. Our main contribution is that our results hold under weak assumptions on the prior; in particular, Gaussian priors satisfy our assumptions. For a particular class of Gaussian priors and noise models studied in the literature, our rates are close to the optimal minimax rate. An instructive example of posterior inconsistency is provided for the regression problem with pointwise observations.
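As a sketch of the setting described above (the specific notation, domain, and boundary condition are our assumptions, not taken from the abstract), the elliptic inverse problem with pointwise observations can be written as:

```latex
% Forward model: elliptic PDE in divergence form on a domain D,
% with unknown diffusion coefficient a and known source term f.
\[
  -\nabla \cdot \bigl( a \, \nabla u_a \bigr) = f \quad \text{in } D,
  \qquad u_a = 0 \quad \text{on } \partial D.
\]
% Noisy pointwise observations of the solution u_a:
\[
  y_i = u_a(x_i) + \varepsilon_i, \qquad
  \varepsilon_i \overset{\text{iid}}{\sim} N(0, \sigma^2), \qquad
  i = 1, \dots, n.
\]
% Posterior consistency: for a prior \Pi on the coefficient a, the
% posterior \Pi(\,\cdot \mid y_1, \dots, y_n) concentrates around the
% true coefficient a_0 as n \to \infty or \sigma \to 0.
```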