Divide and Conquer in Non-standard Problems and the Super-efficiency Phenomenon

Abstract

We study how the divide and conquer principle --- partition the available data into subsamples, compute an estimate from each subsample, and combine these appropriately to form the final estimator --- works in non-standard problems where rates of convergence are typically slower than $\sqrt{n}$ and limit distributions are non-Gaussian, with special emphasis on the least squares estimator of a monotone regression function. We find that the pooled estimator, obtained by averaging non-standard estimates across mutually exclusive subsamples, outperforms the non-standard estimator based on the entire sample in the sense of pointwise inference. We also show that, under appropriate conditions, if the number of subsamples is allowed to increase at an appropriate rate, the pooled estimator is asymptotically normally distributed with a variance that is empirically estimable from the subsample-level estimates. Further, in the context of monotone function estimation, we show that this gain in pointwise efficiency comes at a price: the pooled estimator's performance, in a uniform sense (maximal risk) over a class of models, worsens as the number of subsamples increases, leading to a version of the super-efficiency phenomenon. In the process, we develop analytical results on the order of the bias in isotonic regression, the first such results in the literature, which are of independent interest.
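The divide and conquer scheme described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it fits the isotonic least squares estimator on each of `k` mutually exclusive subsamples via the pool adjacent violators algorithm (PAVA) and averages the resulting step-function estimates at a fixed point `x0`. All function names (`pava`, `eval_step`, `pooled_isotonic`) and the toy regression function are illustrative assumptions.

```python
import random

def pava(y):
    """Pool Adjacent Violators: least-squares non-decreasing fit to y."""
    blocks = []  # stack of (block mean, block size)
    for v in y:
        blocks.append((float(v), 1))
        # Merge adjacent blocks while they violate monotonicity.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, c2 = blocks.pop()
            m1, c1 = blocks.pop()
            blocks.append(((m1 * c1 + m2 * c2) / (c1 + c2), c1 + c2))
    fit = []
    for m, c in blocks:
        fit.extend([m] * c)
    return fit

def eval_step(xs, fit, x0):
    """Evaluate the piecewise-constant isotonic fit at the point x0."""
    val = fit[0]
    for x, f in zip(xs, fit):
        if x <= x0:
            val = f
        else:
            break
    return val

def pooled_isotonic(data, k, x0):
    """Divide and conquer: split data into k mutually exclusive subsamples,
    fit isotonic regression on each, and average the k estimates at x0."""
    random.shuffle(data)
    parts = [data[i::k] for i in range(k)]
    estimates = []
    for part in parts:
        part.sort(key=lambda p: p[0])
        xs = [p[0] for p in part]
        ys = [p[1] for p in part]
        estimates.append(eval_step(xs, pava(ys), x0))
    return sum(estimates) / k

# Toy example: noisy observations of the monotone function f(x) = x**2.
random.seed(0)
data = [(x, x**2 + random.gauss(0, 0.1))
        for x in (random.random() for _ in range(400))]
est = pooled_isotonic(data, k=4, x0=0.5)  # should be near f(0.5) = 0.25
```

The per-subsample estimates collected inside `pooled_isotonic` are exactly the quantities from which, per the abstract, the pooled estimator's variance can be estimated empirically.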