ResearchTrend.AI

arXiv: 1902.08885 (v3, latest)

De-Biasing The Lasso With Degrees-of-Freedom Adjustment

24 February 2019
Pierre C. Bellec
Cun-Hui Zhang
Abstract

This paper studies schemes to de-bias the Lasso in a linear model $y = X\beta + \epsilon$ where the goal is to construct confidence intervals for $a_0^T\beta$ in a direction $a_0$, where $X$ has iid $N(0,\Sigma)$ rows. We show that previously analyzed proposals to de-bias the Lasso require a modification in order to enjoy efficiency in a full range of sparsity. This modification takes the form of a degrees-of-freedom adjustment that accounts for the dimension of the model selected by the Lasso. Let $s_0$ be the true sparsity. If $\Sigma$ is known and the ideal score vector proportional to $X\Sigma^{-1}a_0$ is used, the unadjusted de-biasing schemes proposed previously enjoy efficiency if $s_0 \lll n^{2/3}$. However, if $s_0 \ggg n^{2/3}$, the unadjusted schemes cannot be efficient for certain $a_0$: it is then necessary to modify existing procedures by a degrees-of-freedom adjustment. This modification grants asymptotic efficiency for any $a_0$ when $s_0/p \to 0$ and $s_0\log(p/s_0)/n \to 0$. If $\Sigma$ is unknown, efficiency is granted for general $a_0$ when
$$\frac{s_0\log p}{n}+\min\Big\{\frac{s_\Omega\log p}{n},\frac{\|\Sigma^{-1}a_0\|_1\sqrt{\log p}}{\|\Sigma^{-1/2}a_0\|_2 \sqrt n}\Big\}+\frac{\min(s_\Omega,s_0)\log p}{\sqrt n}\to 0,$$
where $s_\Omega = \|\Sigma^{-1}a_0\|_0$, provided that the de-biased estimate is modified with the degrees-of-freedom adjustment. The dependence on $s_0$, $s_\Omega$ and $\|\Sigma^{-1}a_0\|_1$ is optimal. Our estimated score vector provides a novel methodology to handle dense $a_0$. Our analysis shows that the degrees-of-freedom adjustment is not needed when the initial bias in direction $a_0$ is small, which is granted under stringent conditions on $\Sigma^{-1}$. The main proof argument is an interpolation path similar to that typically used to derive Slepian's lemma. It yields a new $\ell_\infty$ error bound for the Lasso which is of independent interest.
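To make the degrees-of-freedom adjustment concrete, here is a minimal numerical sketch in the simplest setting of the abstract: known $\Sigma = I$ and direction $a_0 = e_1$, so the ideal score vector is proportional to the first column of $X$. In this case the classical de-biased Lasso divides the score-weighted residual by $n$, while the adjusted version divides by $n - \hat{s}$, where $\hat{s}$ (the size of the Lasso support) plays the role of the Lasso's degrees of freedom. All function and variable names below are illustrative, not from the paper; the Lasso is solved with a plain ISTA loop so the example is self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, s0, sigma = 400, 80, 5, 0.5
X = rng.standard_normal((n, p))       # iid N(0, I) rows, as in the abstract
beta = np.zeros(p)
beta[:s0] = 1.0                       # true sparsity s0
y = X @ beta + sigma * rng.standard_normal(n)

def lasso_ista(X, y, lam, iters=2000):
    """Lasso for (1/(2n))||y - Xb||^2 + lam*||b||_1 via proximal gradient."""
    b = np.zeros(X.shape[1])
    step = 1.0 / np.linalg.norm(X, 2) ** 2    # 1 / largest eigenvalue of X^T X
    for _ in range(iters):
        g = b - step * (X.T @ (X @ b - y))    # gradient step on the smooth part
        b = np.sign(g) * np.maximum(np.abs(g) - step * X.shape[0] * lam, 0.0)
    return b

lam = 2 * sigma * np.sqrt(np.log(p) / n)      # a standard penalty level
b_hat = lasso_ista(X, y, lam)
resid = y - X @ b_hat

z0 = X[:, 0]                          # ideal score for a0 = e1 when Sigma = I
df = np.count_nonzero(b_hat)          # |support of the Lasso| = degrees of freedom

theta_unadj = b_hat[0] + z0 @ resid / n          # unadjusted de-biased Lasso
theta_adj = b_hat[0] + z0 @ resid / (n - df)     # degrees-of-freedom adjusted
```

At this sample size both estimates land close to the true value $\beta_1 = 1$; the abstract's point is that as $s_0$ grows (past $n^{2/3}$ in the known-$\Sigma$ case), the $1/n$ and $1/(n-\hat{s})$ normalizations diverge enough that only the adjusted estimator remains asymptotically efficient.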
