Consistency of the posterior distribution in generalised linear inverse problems

For ill-posed inverse problems, a regularised solution can be interpreted as a mode of the posterior distribution in a Bayesian framework. This framework enriches the set of solutions, since other posterior summaries, such as the posterior mean, can also serve as solutions to the inverse problem. The Bayesian formulation of an ill-posed inverse problem is also natural for scientists, as it incorporates a priori information in a rigorous probabilistic framework, and the posterior distribution can be viewed as a set of possible solutions to the ill-posed inverse problem, each weighted by how well it is supported by the data and the prior information. In this paper we study properties of Bayesian solutions to ill-posed inverse problems, namely consistency and the rate of convergence in the Ky Fan metric. We consider cases where the error distribution is not necessarily Gaussian but belongs to a class of models we refer to as Generalised Linear Inverse Problems; this setting includes models where the response depends on the unknown parameter nonlinearly. We also consider the particular case where the unknown parameter lies on the boundary of the parameter set, and show that the rate of convergence in this case is faster than when the unknown parameter is an interior point.
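As an illustrative sketch only (a Gaussian special case, not the generalised setting treated in the paper): for a linear model with Gaussian noise and a Gaussian prior, the posterior mode coincides with the classical Tikhonov-regularised solution,
\[
y = Ax + \varepsilon, \quad \varepsilon \sim N(0, \sigma^2 I), \quad x \sim N(0, \tau^2 I)
\;\Longrightarrow\;
\hat{x}_{\mathrm{MAP}} = \arg\min_{x} \left\{ \|y - Ax\|^2 + \lambda \|x\|^2 \right\}, \quad \lambda = \sigma^2/\tau^2 .
\]
The Ky Fan metric used to quantify convergence is, for random elements $X$, $Y$ of a metric space $(\mathcal{X}, d)$,
\[
\rho_K(X, Y) = \inf\{\varepsilon > 0 : \mathbb{P}\left(d(X, Y) > \varepsilon\right) \le \varepsilon\},
\]
so convergence in $\rho_K$ is equivalent to convergence in probability, and rates in this metric quantify how fast that convergence occurs.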