Does SLOPE outperform bridge regression?

20 September 2019
Shuaiwen Wang
Haolei Weng
A. Maleki
arXiv:1909.09345 · PDF · HTML
Abstract

A recently proposed SLOPE estimator (arXiv:1407.3824) has been shown to adaptively achieve the minimax $\ell_2$ estimation rate under high-dimensional sparse linear regression models (arXiv:1503.08393). Such minimax optimality holds in the regime where the sparsity level $k$, sample size $n$, and dimension $p$ satisfy $k/p \rightarrow 0$ and $k\log p/n \rightarrow 0$. In this paper, we characterize the estimation error of SLOPE in the complementary regime where both $k$ and $n$ scale linearly with $p$, and provide new insights into the performance of SLOPE estimators. We first derive a concentration inequality for the finite-sample mean squared error (MSE) of SLOPE. The quantity that the MSE concentrates around takes a complicated and implicit form. Through a delicate analysis of this quantity, we prove that among all SLOPE estimators, LASSO is optimal for estimating $k$-sparse parameter vectors without tied non-zero components in the low-noise scenario. In the large-noise scenario, on the other hand, the family of SLOPE estimators is sub-optimal compared with bridge regression estimators such as Ridge.
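For context, both families compared above are penalized least-squares estimators; the display below recaps the standard definitions from the cited works (arXiv:1407.3824 for SLOPE) and is not a result of this paper. SLOPE applies a sorted-$\ell_1$ penalty with a non-increasing weight sequence,
\[
\hat{\beta}_{\mathrm{SLOPE}} = \arg\min_{\beta \in \mathbb{R}^p} \ \frac{1}{2}\|y - X\beta\|_2^2 + \sum_{i=1}^{p} \lambda_i |\beta|_{(i)}, \qquad \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p \ge 0,
\]
where $|\beta|_{(1)} \ge \cdots \ge |\beta|_{(p)}$ denote the absolute values of the coordinates of $\beta$ sorted in decreasing order; LASSO is the special case $\lambda_1 = \cdots = \lambda_p$. Bridge regression instead uses an $\ell_q$-type penalty,
\[
\hat{\beta}_{\mathrm{bridge}} = \arg\min_{\beta \in \mathbb{R}^p} \ \frac{1}{2}\|y - X\beta\|_2^2 + \lambda \sum_{i=1}^{p} |\beta_i|^q,
\]
with Ridge corresponding to $q = 2$ (and LASSO to $q = 1$).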

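To make the low-noise versus large-noise comparison concrete, here is a minimal Python simulation sketch. It is an illustration rather than code from the paper: it substitutes scikit-learn's Lasso and Ridge for the tuned SLOPE and bridge estimators analyzed in the abstract, and the design, sparsity level, and regularization parameters are arbitrary placeholders rather than the theoretically calibrated choices.

```python
# Minimal simulation sketch (not from the paper): compare LASSO and Ridge
# coefficient estimation error on a k-sparse linear model at two noise levels.
# Assumptions: i.i.d. Gaussian design; sklearn's Lasso/Ridge stand in for the
# tuned SLOPE / bridge estimators; penalty levels are illustrative only.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
n, p, k = 500, 1000, 50              # n and k proportional to p, as in the paper's regime

X = rng.standard_normal((n, p)) / np.sqrt(n)
beta = np.zeros(p)
beta[:k] = rng.standard_normal(k)    # k-sparse signal with (almost surely) untied entries

for sigma in (0.1, 3.0):             # low-noise vs. large-noise scenarios
    y = X @ beta + sigma * rng.standard_normal(n)
    for name, est in (("LASSO", Lasso(alpha=0.1 * sigma, max_iter=10000)),
                      ("Ridge", Ridge(alpha=1.0))):
        est.fit(X, y)
        mse = np.mean((est.coef_ - beta) ** 2)
        print(f"sigma={sigma:4.1f}  {name:5s}  coefficient MSE = {mse:.4f}")
```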