Here we present an expository, general analysis of valid post-selection or post-regularization inference about a low-dimensional target parameter, \alpha, in the presence of a very high-dimensional nuisance parameter, \eta, which is estimated using modern selection or regularization methods. Our analysis relies on high-level, easy-to-interpret conditions that allow one to clearly see the structures needed for achieving valid post-regularization inference. Simple, readily verifiable sufficient conditions are provided for a class of affine-quadratic models. We focus our discussion on estimation and inference procedures based on using the empirical analog of theoretical equations M(\alpha, \eta)=0 which identify \alpha. Within this structure, we show that setting up such equations so that the orthogonality/immunization condition \partial_\eta M(\alpha, \eta) = 0 holds at the true parameter values, coupled with plausible conditions on the smoothness of M and the quality of the estimator \hat\eta, guarantees that inference on the main parameter \alpha based on the testing or point estimation methods discussed below will be regular despite selection or regularization biases occurring in the estimation of \eta. In particular, the estimator of \alpha will often be uniformly consistent at the root-n rate and uniformly asymptotically normal even though the estimators \hat\eta will generally not be asymptotically linear and regular. The uniformity holds over large classes of models that do not impose highly implausible "beta-min" conditions. We also show that inference can be carried out by inverting tests formed from Neyman's C(\alpha) (orthogonal score) statistics.
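As a concrete illustration of the orthogonality/immunization condition (a canonical example in this literature, offered here as an illustration rather than drawn from the abstract itself), consider the partially linear model y = d\alpha_0 + g(x) + \zeta with E[\zeta \mid d, x] = 0. Taking the nuisance parameter \eta = (\ell, m), where \ell(x) = E[y \mid x] and m(x) = E[d \mid x], one may set
\[
M(\alpha, \eta) \;=\; \mathrm{E}\Big[\big(y - \ell(x) - \alpha\,(d - m(x))\big)\,\big(d - m(x)\big)\Big].
\]
At the true values, M(\alpha_0, \eta_0) = 0 identifies \alpha_0, and the Gateaux derivative of M with respect to \eta vanishes at (\alpha_0, \eta_0): perturbing \ell leaves the expectation unchanged because E[d - m(x) \mid x] = 0, and perturbing m leaves it unchanged because the residual y - \ell_0(x) - \alpha_0(d - m_0(x)) = \zeta is mean-independent of (d, x). Consequently, first-order errors in estimating \ell and m (e.g., from lasso model-selection mistakes) do not translate into first-order bias in the estimator of \alpha_0.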