Jump estimation in inverse regression

We consider estimation of a step function f from noisy observations of a deconvolution φ∗f, where φ is some bounded L¹-function. We use a penalized least squares estimator to reconstruct the signal f from the observations, with penalty equal to the number of jumps of the reconstruction. Asymptotically, it is possible to correctly estimate the number of jumps with probability one. Given that the number of jumps is correctly estimated, we show that the corresponding parameter estimates of the jump locations and jump heights are n^{-1/2}-consistent and converge to a joint normal distribution with covariance structure depending on φ, and that this rate is minimax for bounded continuous kernels φ. As a special case we obtain the asymptotic distribution of the least squares estimator in multiphase regression and generalisations thereof. In contrast to the results obtained for bounded φ, we show that for kernels with a singularity of order O(|x|^{-α}), 1/2 < α < 1, a jump location can be estimated at a rate of n^{-1/(3-2α)}, which is again the minimax rate. We find that these rates do not depend on the spectral information of the operator but rather on its localization properties in the time domain. Finally, it turns out that adaptive sampling does not improve the rate of convergence, in strict contrast to the case of direct regression.
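A penalty equal to the number of jumps makes this an ℓ0-penalized (Potts-type) least squares problem, which in the direct-regression case can be minimized exactly by dynamic programming over segment boundaries. The following is a minimal sketch of that idea, assuming the direct (non-deconvolved) observation model as a simplification; the function name `potts_segmentation` and the O(n²) dynamic program are illustrative, not the paper's estimator for the inverse problem:

```python
import numpy as np

def potts_segmentation(y, gamma):
    """Minimize sum((y - f)^2) + gamma * (#jumps of f) over step functions f.

    Illustrative O(n^2) dynamic program for the direct-regression case
    (no deconvolution kernel), not the paper's inverse-problem estimator.
    """
    n = len(y)
    # Prefix sums give O(1) residual cost of fitting a constant to y[i:j].
    s = np.concatenate(([0.0], np.cumsum(y)))
    s2 = np.concatenate(([0.0], np.cumsum(y**2)))

    def seg_cost(i, j):
        # Residual sum of squares of y[i:j] about its mean.
        m = j - i
        return s2[j] - s2[i] - (s[j] - s[i]) ** 2 / m

    best = np.full(n + 1, np.inf)   # best[j] = optimal cost for y[:j]
    best[0] = -gamma                # first segment carries no jump penalty
    prev = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + gamma + seg_cost(i, j)
            if c < best[j]:
                best[j], prev[j] = c, i

    # Backtrack the segment boundaries and the piecewise-constant fit.
    fit = np.empty(n)
    jumps = []
    j = n
    while j > 0:
        i = prev[j]
        fit[i:j] = (s[j] - s[i]) / (j - i)
        if i > 0:
            jumps.append(i)
        j = i
    return fit, sorted(jumps)
```

For a noiseless signal with a single jump, e.g. ten zeros followed by ten ones, a small penalty recovers the jump location exactly; the penalty γ trades data fidelity against the estimated number of jumps.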