ResearchTrend.AI

arXiv:2001.07212
Generalization Bounds for High-dimensional M-estimation under Sparsity Constraint

20 January 2020
Xiao-Tong Yuan
Ping Li
Abstract

The ℓ₀-constrained empirical risk minimization (ℓ₀-ERM) is a promising tool for high-dimensional statistical estimation. The existing analysis of the ℓ₀-ERM estimator focuses mostly on parameter estimation and support recovery consistency. From the perspective of statistical learning, another fundamental question is how well the ℓ₀-ERM estimator performs on unseen samples. The answer to this question is important for understanding the learnability of such a non-convex (and NP-hard) M-estimator, yet it remains relatively underexplored. In this paper, we investigate this problem and develop a generalization theory for ℓ₀-ERM. We establish, in both white-box and black-box statistical regimes, a set of generalization gap and excess risk bounds for ℓ₀-ERM that characterize its sparse prediction and optimization capability. Our theory reveals three main findings: 1) ℓ₀-ERM attains tighter generalization bounds than ℓ₂-ERM when the risk function is (with high probability) restricted strongly convex; 2) tighter uniform generalization bounds can be established for ℓ₀-ERM than for conventional dense ERM; and 3) sparsity-level-invariant bounds can be established by imposing additional strong-signal conditions that ensure the stability of ℓ₀-ERM. In light of these results, we further provide generalization guarantees for the Iterative Hard Thresholding (IHT) algorithm, one of the most popular greedy pursuit methods for approximately solving ℓ₀-ERM. Numerical evidence confirms our theoretical predictions when applied to sparsity-constrained linear regression and logistic regression models.
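For readers unfamiliar with IHT, the following is a minimal sketch of the algorithm for the sparsity-constrained least-squares instance of ℓ₀-ERM mentioned in the abstract. It alternates a gradient step on the empirical risk with hard thresholding (projection onto the set of k-sparse vectors). The function names, step-size choice, and iteration count are our own illustrative assumptions, not taken from the paper.

```python
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x; zero out the rest."""
    out = np.zeros_like(x)
    if k > 0:
        idx = np.argsort(np.abs(x))[-k:]  # indices of the top-k magnitudes
        out[idx] = x[idx]
    return out

def iht_least_squares(A, y, k, step=None, iters=300):
    """Illustrative IHT sketch for min_x ||Ax - y||^2  s.t.  ||x||_0 <= k.

    Assumed interface (not from the paper): A is an (n, d) design matrix,
    y the (n,) response vector, k the target sparsity level.
    """
    n, d = A.shape
    if step is None:
        # conservative step size: 1 / (largest squared singular value of A)
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(d)
    for _ in range(iters):
        grad = A.T @ (A @ x - y)                 # gradient of the squared loss
        x = hard_threshold(x - step * grad, k)   # project onto the ℓ0 ball
    return x
```

The same template extends to sparsity-constrained logistic regression by swapping in the logistic-loss gradient; only the `grad` line changes.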
