Concentration Based Inference for High Dimensional (Generalized)
Regression Models: New Phenomena in Hypothesis Testing
We develop simple and non-asymptotically justified methods for hypothesis testing about the coefficients θ* in high dimensional (generalized) regression models, where the number of parameters p can exceed the sample size n. We consider H_0: g(θ*) = 0_m against H_1: g(θ*) ≠ 0_m, where m can be as large as p and g can be nonlinear in θ*. Our test statistic is based on the sample score vector evaluated at an estimate θ̂_α that satisfies g(θ̂_α) = 0_m, where α is the prespecified Type I error. We provide non-asymptotic control of the Type I and Type II errors of the score test, as well as confidence regions. By exploiting the concentration phenomenon for Lipschitz functions, the key component reflecting the "dimension complexity" in our non-asymptotic thresholds uses a Monte-Carlo approximation to "mimic" the expectation around which the statistic concentrates, and it automatically captures the dependence between the coordinates. The novelty of our methods is that their validity does not rely on good behavior of the estimation error θ̂_α − θ*, either non-asymptotically or asymptotically. Most interestingly, we discover phenomena that run opposite to the existing literature: (1) more restrictions in H_0 (larger m) make our procedures more powerful; (2) whether θ* is sparse or not, it is possible for our procedures to detect alternatives with probability at least 1 − β; (3) the coverage probability of our procedures is not affected by how sparse θ* is. The proposed procedures are evaluated in simulation studies, where the empirical evidence supports our key insights.
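The Monte-Carlo threshold idea can be illustrated with a generic sketch. The code below is an illustrative assumption, not the paper's exact construction: it uses a sup-norm statistic on the mean score and a multiplier-bootstrap-style Monte-Carlo approximation of its (1 − α)-quantile, which automatically reflects the dependence between coordinates through the sample covariance of the scores. The function name `mc_threshold` and all parameter choices are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_threshold(scores, alpha=0.05, reps=2000, rng=rng):
    """Monte-Carlo critical value for the sup-norm of the mean score.

    Each Monte-Carlo draw is (1/n) * sum_i e_i * s_i with e_i ~ N(0, 1),
    which mimics the fluctuations of the mean score under the null and
    inherits the dependence structure across the p coordinates.
    (Illustrative sketch only, under assumed notation.)
    """
    n, p = scores.shape
    centered = scores - scores.mean(axis=0)
    draws = rng.standard_normal((reps, n)) @ centered / n  # (reps, p)
    stats = np.abs(draws).max(axis=1)                      # sup-norm per draw
    return np.quantile(stats, 1 - alpha)

# Toy usage: n = 50 observations of a p = 200 dimensional score under the null.
n, p = 50, 200
scores = rng.standard_normal((n, p))        # mean-zero scores under H_0
T = np.abs(scores.mean(axis=0)).max()       # sup-norm test statistic
thr = mc_threshold(scores, alpha=0.05)
reject = T > thr                            # nonasymptotic decision rule
```

Because the threshold is computed from the observed scores themselves, no closed-form limiting distribution is needed, in the spirit of the non-asymptotic guarantees described above.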