Noisy Laplace deconvolution with error in the operator

We address the problem of Laplace deconvolution with random noise in a regression framework. The time domain is not fixed but grows with the number of observation points. Moreover, the convolution kernel is unknown and observed only up to experimental noise. We build on a recent estimation procedure that couples a Galerkin projection of the operator onto Laguerre functions with thresholding applied to both the operator and the observed signal. We establish the minimax optimality of our procedure under squared-error loss, when the smoothness of the signal is measured in a Laguerre-Sobolev sense and the kernel satisfies mild blurring assumptions. We stress that the resulting procedure is adaptive with respect both to the target function's smoothness and to the kernel's blurring properties. We conclude with a numerical study emphasizing the good practical performance of the procedure on concrete examples.
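As a rough illustration of the estimation scheme described above, the following minimal Python sketch projects a noisy kernel and noisy observations onto the first few Laguerre functions, hard-thresholds the resulting coefficients, assembles a lower-triangular Galerkin matrix for the convolution operator, and solves for the signal's coefficients. The scale parameter, dimension, threshold level, and the exact form of the Galerkin matrix are illustrative assumptions, not the paper's tuned choices.

```python
import numpy as np
from scipy.special import eval_laguerre

def laguerre_fn(k, t, a=0.5):
    """k-th Laguerre function on [0, inf); the scale a is an illustrative choice."""
    return np.sqrt(2 * a) * eval_laguerre(k, 2 * a * t) * np.exp(-a * t)

def galerkin_threshold_estimate(t, y_noisy, g_noisy, dim=20, thresh=0.05, a=0.5):
    """Toy sketch (not the paper's exact estimator): project the noisy kernel g and
    the noisy observations y on the first `dim` Laguerre functions, hard-threshold
    the coefficients, build a lower-triangular Galerkin matrix for the Laplace
    convolution operator, and recover the Laguerre coefficients of the signal f."""
    dt = t[1] - t[0]                                   # uniform grid assumed
    Phi = np.array([laguerre_fn(k, t, a) for k in range(dim)])  # (dim, n) basis values
    # Laguerre coefficients of the observed signal and the noisy kernel (Riemann sums)
    y_coef = (Phi * y_noisy).sum(axis=1) * dt
    g_coef = (Phi * g_noisy).sum(axis=1) * dt
    # Hard thresholding to discard coefficients dominated by noise
    y_coef[np.abs(y_coef) < thresh] = 0.0
    g_coef[np.abs(g_coef) < thresh] = 0.0
    # In the Laguerre basis the convolution operator is lower-triangular and
    # Toeplitz-like; the banded form below is a simplified stand-in for the
    # projected operator used in the paper.
    G = np.zeros((dim, dim))
    for i in range(dim):
        for j in range(i + 1):
            prev = g_coef[i - j - 1] if i - j - 1 >= 0 else 0.0
            G[i, j] = (g_coef[i - j] - prev) / np.sqrt(2 * a)
    f_coef = np.linalg.lstsq(G, y_coef, rcond=None)[0]
    return Phi.T @ f_coef                              # estimate of f on the grid t
```

Thresholding both the kernel and the signal coefficients is what makes the estimator usable when the operator itself is noisy: small, noise-dominated entries would otherwise be amplified when the (nearly ill-conditioned) Galerkin system is inverted.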