Asymptotic errors for convex penalized linear regression beyond Gaussian matrices

11 February 2020
Cédric Gerbelot
A. Abbara
Florent Krzakala
arXiv: 2002.04372
Abstract

We consider the problem of learning a coefficient vector $x_0 \in \mathbb{R}^N$ from noisy linear observations $y = Fx_0 + w \in \mathbb{R}^M$ in the high-dimensional limit $M, N \to \infty$ with the ratio $\alpha = M/N$ fixed. We provide a rigorous derivation of an explicit formula -- first conjectured using heuristic methods from statistical physics -- for the asymptotic mean squared error obtained by penalized convex regression estimators such as the LASSO or the elastic net, for a class of very generic random matrices corresponding to rotationally invariant data matrices with arbitrary spectrum. The proof is based on a convergence analysis of an oracle version of vector approximate message-passing (oracle-VAMP) and on the properties of its state evolution equations. Our method leverages and highlights the link between vector approximate message-passing, Douglas-Rachford splitting and proximal descent algorithms, extending previous results obtained with i.i.d. matrices for a large class of problems. We illustrate our results on some concrete examples and show that even though they are asymptotic, our predictions agree remarkably well with numerics even for very moderate sizes.
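The setting in the abstract is straightforward to simulate. The sketch below is not the authors' code: it is a minimal illustration, with all names and parameter choices (the sizes N and alpha, the uniform singular spectrum, the regularization strength lam, and ISTA as the convex solver) assumed for the example. It draws a rotationally invariant matrix $F = U \,\mathrm{diag}(s)\, V^\top$ with Haar-distributed orthogonal $U, V$ and an arbitrary spectrum $s$, solves the LASSO by proximal gradient, and reports the per-coordinate mean squared error whose high-dimensional limit the paper characterizes.

```python
# Minimal sketch (assumed parameters throughout), using only NumPy.
import numpy as np

rng = np.random.default_rng(0)

# Problem sizes: M = alpha * N observations, N unknowns.
N, alpha = 400, 0.7
M = int(alpha * N)

def haar(n):
    """Sample an n x n orthogonal matrix from the Haar measure."""
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))  # sign fix makes the QR draw exactly Haar

# Rotationally invariant data matrix F = U diag(s) V^T with an
# arbitrary (here: uniform on [0.5, 2]) singular spectrum.
U, V = haar(M), haar(N)
s = rng.uniform(0.5, 2.0, size=min(M, N))
F = U[:, :len(s)] @ np.diag(s) @ V[:len(s), :]

# Sparse ground truth x0 and noisy observations y = F x0 + w.
x0 = rng.standard_normal(N) * (rng.random(N) < 0.2)
w = 0.1 * rng.standard_normal(M)
y = F @ x0 + w

# LASSO via proximal gradient (ISTA):
#   x <- prox_{lam * t * ||.||_1}( x - t * F^T (F x - y) ),
# where the prox of the l1 norm is coordinate-wise soft thresholding.
lam = 0.05
t = 1.0 / np.linalg.norm(F, 2) ** 2  # step size 1/L, L = ||F||_2^2
x = np.zeros(N)
for _ in range(2000):
    z = x - t * (F.T @ (F @ x - y))
    x = np.sign(z) * np.maximum(np.abs(z) - lam * t, 0.0)

print("empirical MSE per coordinate:", np.mean((x - x0) ** 2))
```

At moderate sizes such as these, the empirical MSE printed here is the finite-size quantity that the paper's asymptotic formula is reported to match remarkably well; the soft-thresholding proximal step is also the building block shared by the Douglas-Rachford and VAMP iterations the abstract connects.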
