
Generic chaining and the $\ell_1$-penalty

16 May 2012
Sara van de Geer
Abstract

We address the choice of the tuning parameter $\lambda$ in $\ell_1$-penalized M-estimation. Our main concern is models which are highly nonlinear, such as the Gaussian mixture model. The number of parameters $p$ is moreover large, possibly larger than the number of observations $n$. The generic chaining technique of Talagrand [2005] is tailored for this problem. It leads to the choice $\lambda \asymp \sqrt{\log p / n}$, as in the standard Lasso procedure (which concerns the linear model and least squares loss).
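To make the concluding rate concrete in the linear-model special case the abstract mentions, here is a minimal sketch, not the paper's estimator: a standard Lasso fit with a tuning parameter of order $\sqrt{\log p / n}$. The use of scikit-learn, the noise level sigma, and the constant 2 in the penalty are illustrative assumptions, not prescriptions from the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, sigma = 100, 500, 1.0           # n observations, p >> n parameters
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 2.0                        # sparse truth: 5 active coefficients
y = X @ beta + sigma * rng.standard_normal(n)

# Tuning parameter of order sqrt(log p / n); sigma and the factor 2
# are illustrative choices, since the theory fixes only the order.
lam = sigma * np.sqrt(2.0 * np.log(p) / n)

# scikit-learn's Lasso minimizes (1/(2n))||y - Xw||_2^2 + alpha*||w||_1,
# so alpha plays the role of lambda here.
model = Lasso(alpha=lam).fit(X, y)
print("lambda:", lam)
print("nonzero coefficients:", np.count_nonzero(model.coef_))
```

With $p > n$ as above, the $\sqrt{\log p / n}$ scaling keeps the penalty just large enough to dominate the noise while shrinking most coefficients to zero.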
