When is it worthwhile to jackknife? Breaking the quadratic barrier for
Z-estimators
Resampling methods are especially well-suited to inference with estimators that provide only "black-box" access. The jackknife is a form of resampling, widely used for bias correction and variance estimation, that is well-understood under classical scaling in which the sample size grows while the problem dimension stays fixed. We study its behavior when applied to estimating functionals of high-dimensional $Z$-estimators, allowing both the sample size $n$ and the problem dimension $d$ to diverge. We begin by showing that the plug-in estimator based on the $Z$-estimate suffers from a quadratic breakdown: while it is $\sqrt{n}$-consistent and asymptotically normal whenever $n \gtrsim d^2$, it fails for a broad class of problems whenever $n \lesssim d^2$. We then show that, under suitable regularity conditions, applying a jackknife correction yields an estimate that is $\sqrt{n}$-consistent and asymptotically normal whenever $n \gtrsim d^{3/2}$. This provides strong motivation for using the jackknife in high-dimensional problems where the dimension is moderate relative to the sample size. We illustrate consequences of our general theory for various specific $Z$-estimators, including non-linear functionals in linear models; generalized linear models; and the inverse propensity score weighting (IPW) estimate for the average treatment effect, among others.
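As a concrete illustration of the correction the abstract describes, here is a minimal sketch (not code from the paper) of the standard delete-one jackknife bias correction applied to a plug-in estimate of a functional of a $Z$-estimate. The names `jackknife_plugin`, `fit`, and `functional` are hypothetical placeholders, and the OLS example at the bottom is an assumed setup for demonstration only:

```python
import numpy as np

def jackknife_plugin(X, y, fit, functional):
    """Jackknife bias-corrected plug-in estimate of functional(theta*).

    fit(X, y) -> theta_hat, a Z-estimate (e.g., an MLE or OLS solution)
    functional(theta) -> scalar functional of interest
    """
    n = X.shape[0]
    plug_in = functional(fit(X, y))
    # Delete-one (leave-one-out) plug-in estimates
    loo = np.array([
        functional(fit(np.delete(X, i, axis=0), np.delete(y, i)))
        for i in range(n)
    ])
    # Standard jackknife bias correction:
    #   theta_jack = n * plug_in - (n - 1) * mean(loo)
    return n * plug_in - (n - 1) * loo.mean()

# Hypothetical example: non-linear functional ||theta||^2 in a linear model
rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.standard_normal((n, d))
theta_star = np.ones(d) / np.sqrt(d)
y = X @ theta_star + rng.standard_normal(n)

ols = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
sq_norm = lambda t: float(t @ t)
print(jackknife_plugin(X, y, ols, sq_norm))
```

The correction removes the leading $O(1/n)$ bias term of the plug-in estimate; per the abstract, in the high-dimensional regime this is what relaxes the requirement $n \gtrsim d^2$ to $n \gtrsim d^{3/2}$.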