Regularized Orthogonal Machine Learning for Nonlinear Semiparametric Models

This paper contributes to the literature on high-dimensional sparse M-estimation by allowing the loss function to depend on a functional nuisance parameter, which we estimate with modern machine learning tools. For a class of single-index conditional moment restrictions (CMRs), we explicitly derive the loss function. We first adjust the moment function so that the gradient of the resulting M-estimator loss is insensitive (formally, Neyman-orthogonal) to the first-stage regularization bias. We then take the loss function to be an indefinite integral of the adjusted moment function with respect to the single index. The proposed ℓ1-regularized M-estimator achieves the oracle convergence rate, i.e., the rate attained by an oracle that knows the nuisance parameter and solves only the parametric problem. Our framework nests a novel approach to modeling heterogeneous treatment effects with a binary dependent variable. In addition, we apply our results to conditional moment models with missing data and to static games of incomplete information. Finally, we generalize our results to generic extremum estimation with a nuisance component.
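The orthogonalize-then-regularize recipe can be illustrated, in heavily simplified form, on the partially linear model Y = θD + g(X) + ε with D = m(X) + v, a textbook special case of estimation with a machine-learned nuisance component. The sketch below is our own illustration, not the paper's estimator: it uses a plain coordinate-descent Lasso for the first-stage nuisance fits, two-fold cross-fitting, and the residual-on-residual (Neyman-orthogonal) score for θ, so that first-order sensitivity to the Lasso regularization bias cancels. All function names, tuning constants, and the simulated data are illustrative assumptions.

```python
import numpy as np


def lasso_fit(X, y, alpha=0.1, n_iter=200):
    """Coordinate-descent Lasso for (1/2n)||y - Xw||^2 + alpha*||w||_1.

    A bare-bones first-stage learner for the nuisance functions;
    assumes mean-zero data (no intercept). Illustrative only.
    """
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n   # per-coordinate curvature
    r = y.astype(float).copy()          # current residual y - Xw
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * w[j]                       # remove coord j
            rho = X[:, j] @ r / n
            w[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / col_sq[j]
            r -= X[:, j] * w[j]                       # add it back
    return w


def dml_plm_theta(Y, D, X, n_folds=2, alpha=0.1, seed=0):
    """Cross-fitted orthogonal estimate of theta in Y = theta*D + g(X) + eps.

    First stages: Lasso of Y on X (for E[Y|X]) and of D on X (for E[D|X]),
    each fit on the complement fold. Second stage: residual-on-residual
    regression, whose score is Neyman-orthogonal to both nuisance fits.
    """
    n = len(Y)
    folds = np.random.default_rng(seed).permutation(n) % n_folds
    y_res, d_res = np.empty(n), np.empty(n)
    for k in range(n_folds):
        train, test = folds != k, folds == k
        w_g = lasso_fit(X[train], Y[train], alpha)   # E[Y|X] fit
        w_m = lasso_fit(X[train], D[train], alpha)   # E[D|X] fit
        y_res[test] = Y[test] - X[test] @ w_g
        d_res[test] = D[test] - X[test] @ w_m
    return float(d_res @ y_res / (d_res @ d_res))


# Simulated sparse high-dimensional design (illustrative parameters).
rng = np.random.default_rng(1)
n, p, theta = 2000, 50, 1.0
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [1.0, -1.0, 0.5]                          # sparse nuisance
D = X @ beta + rng.standard_normal(n)                # D = m(X) + v
Y = theta * D + X @ beta + rng.standard_normal(n)    # Y = theta*D + g(X) + eps
theta_hat = dml_plm_theta(Y, D, X)
```

Because the residual-on-residual score has zero derivative with respect to both nuisance fits at the truth, `theta_hat` recovers θ at the parametric rate even though each Lasso stage carries regularization bias; regressing Y on D directly would not.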