ResearchTrend.AI

arXiv:1910.05480 v2 (latest)

First order expansion of convex regularized estimators

12 October 2019
Pierre C. Bellec
Arun K. Kuchibhotla
Abstract

We consider first order expansions of convex penalized estimators in high-dimensional regression problems with random designs. Our setting includes linear regression and logistic regression as special cases. For a given penalty function $h$ and the corresponding penalized estimator $\hat\beta$, we construct a quantity $\eta$, the first order expansion of $\hat\beta$, such that the distance between $\hat\beta$ and $\eta$ is an order of magnitude smaller than the estimation error $\|\hat\beta - \beta^*\|$. In this sense, the first order expansion $\eta$ can be thought of as a generalization of influence functions from the mathematical statistics literature to regularized estimators in high dimensions. Such a first order expansion implies that the risk of $\hat\beta$ is asymptotically the same as the risk of $\eta$, which leads to a precise characterization of the MSE of $\hat\beta$; this characterization takes a particularly simple form for isotropic designs. The first order expansion also leads to inference results based on $\hat\beta$. We provide sufficient conditions for the existence of such a first order expansion for three regularizers: the Lasso in its constrained form, the Lasso in its penalized form, and the Group-Lasso. The results apply to general loss functions under some conditions, and those conditions are satisfied for the squared loss in linear regression and for the logistic loss in the logistic model.
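The abstract's central claim — that $\|\hat\beta - \eta\|$ is an order of magnitude smaller than the estimation error $\|\hat\beta - \beta^*\|$ — can be illustrated numerically for the penalized Lasso under isotropic design. The sketch below is a hypothetical illustration, not the paper's construction of $\eta$: it fits the Lasso by ISTA and compares it to a one-step soft-thresholding surrogate `eta = soft_threshold(beta_star + X.T @ eps / n, lam)`, which coincides with the Lasso exactly when $X^\top X / n = I$. The dimensions, noise level, and penalty level `lam` are arbitrary choices for the simulation.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=2000):
    # ISTA for min_b (1/(2n)) ||y - X b||^2 + lam * ||b||_1
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n  # Lipschitz constant of the gradient
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        b = soft_threshold(b - grad / L, lam / L)
    return b

rng = np.random.default_rng(0)
n, p, s = 500, 50, 5
X = rng.standard_normal((n, p))              # isotropic Gaussian design
beta_star = np.zeros(p)
beta_star[:s] = 1.0                          # s-sparse true coefficients
eps = 0.5 * rng.standard_normal(n)
y = X @ beta_star + eps

lam = 0.1
beta_hat = lasso_ista(X, y, lam)

# Illustrative surrogate for the first order expansion eta: a single
# soft-thresholding step taken from beta_star.  This is NOT the paper's
# exact construction; it is exact only when X^T X / n = I.
eta = soft_threshold(beta_star + X.T @ eps / n, lam)

err_est = np.linalg.norm(beta_hat - beta_star)  # estimation error ||b^ - b*||
err_exp = np.linalg.norm(beta_hat - eta)        # distance to the surrogate
print(err_exp, err_est)
```

In this near-orthogonal regime ($p/n = 0.1$) the distance `err_exp` comes out several times smaller than `err_est`, matching the qualitative picture in the abstract; as $p/n$ grows, the crude one-step surrogate degrades and the paper's conditions become the relevant question.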
