
Uniform Consistency of the Highly Adaptive Lasso Estimator of Infinite Dimensional Parameters

Abstract

Consider the case that we observe $n$ independent and identically distributed copies of a random variable with a probability distribution known to be an element of a specified statistical model. We are interested in estimating an infinite dimensional target parameter that minimizes the expectation of a specified loss function. In \cite{generally_efficient_TMLE} we defined an estimator that minimizes the empirical risk over all multivariate real-valued cadlag functions in the parameter space with variation norm bounded by some constant $M$, and selects $M$ with cross-validation. We referred to this estimator as the Highly Adaptive Lasso (HAL) estimator because the constraint can be formulated as a bound $M$ on the sum of the absolute values of the coefficients of a linear combination of a very large number of basis functions. Specifically, in the case that the target parameter is a conditional mean, it can be implemented with the standard LASSO regression estimator. In \cite{generally_efficient_TMLE} we proved that the HAL estimator is consistent w.r.t. the (quadratic) loss-based dissimilarity at a rate faster than $n^{-1/2}$ (i.e., faster than $n^{-1/4}$ w.r.t. a norm), even when the parameter space is completely nonparametric. The only assumption required for this rate is that the true parameter function has a finite variation norm. The loss-based dissimilarity is often equivalent to the square of an $L^2(P_0)$-type norm. In this article, we establish that, under a weak continuity condition, the HAL estimator is also uniformly consistent.
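The construction described in the abstract (regress the outcome on a very large set of basis functions with an $\ell_1$ bound on the coefficients, so that the fit has bounded variation norm) can be sketched in one dimension. The zero-order indicator basis at the observed points and the plain coordinate-descent lasso solver below are illustrative assumptions, not the authors' implementation, and the penalty level stands in for the cross-validated choice of $M$.

```python
import numpy as np

def hal_basis(x, knots):
    # Zero-order HAL basis in one dimension: one indicator 1{x >= knot}
    # per knot (here, the observed data points). The sum of absolute
    # coefficients of such a fit is its variation norm.
    return (x[:, None] >= knots[None, :]).astype(float)

def lasso_cd(H, y, lam, n_iter=50):
    # Plain cyclic coordinate descent for
    #   0.5 * ||y - H beta||^2 + lam * ||beta||_1
    # via soft-thresholding; a stand-in for any lasso solver.
    n, p = H.shape
    beta = np.zeros(p)
    col_sq = (H ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            if col_sq[j] == 0.0:
                continue
            # Partial residual excluding column j.
            r = y - H @ beta + H[:, j] * beta[j]
            rho = H[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

# Usage: fit a noisy step function, a cadlag target with small variation norm.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 100)
y = (x > 0.5).astype(float) + rng.normal(0.0, 0.1, 100)
H = hal_basis(x, np.sort(x))
beta = lasso_cd(H, y, lam=1.0)
fit = H @ beta
variation_norm = np.sum(np.abs(beta))  # the quantity the bound M constrains
```

Because the indicator at the smallest knot is a column of ones, the fit carries its own intercept; the lasso then places most of the remaining coefficient mass near the true jump point, so the selected penalty directly controls the variation norm of the fitted function.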
