  3. 1908.08791
On the asymptotic properties of SLOPE

23 August 2019
Michał Kos
M. Bogdan
Abstract

The Sorted L-One Penalized Estimator (SLOPE) is a relatively new convex optimization procedure for selecting predictors in large databases. In contrast to LASSO, SLOPE has been proved to be asymptotically minimax in the context of sparse high-dimensional generalized linear models. Additionally, when the design matrix is orthogonal, SLOPE with the sequence of tuning parameters λ^BH, corresponding to the sequence of decaying thresholds of the Benjamini-Hochberg multiple testing correction, provably controls the False Discovery Rate (FDR) in the multiple regression model. In this article we provide new asymptotic results on the properties of SLOPE when the elements of the design matrix are iid random variables from the Gaussian distribution. Specifically, we provide conditions under which the asymptotic FDR of SLOPE based on the sequence λ^BH converges to zero and the power converges to 1. We illustrate our theoretical asymptotic results with an extensive simulation study. We also provide precise formulas describing the FDR of SLOPE under different loss functions, which set the stage for future results on the model selection properties of SLOPE and its extensions.
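The two ingredients mentioned above can be sketched in a few lines: the λ^BH tuning sequence is λ_i = Φ^{-1}(1 − iq/(2p)) for a nominal FDR level q (the decaying Benjamini-Hochberg thresholds), and the SLOPE penalty pairs this decreasing sequence with the coordinates of β sorted by magnitude. A minimal sketch, with function names chosen here for illustration:

```python
import numpy as np
from scipy.stats import norm

def lambda_bh(p, q=0.1):
    """lambda^BH sequence: lambda_i = Phi^{-1}(1 - i*q/(2p)), i = 1..p.

    A decreasing sequence of thresholds mirroring the
    Benjamini-Hochberg multiple testing correction.
    """
    i = np.arange(1, p + 1)
    return norm.ppf(1 - i * q / (2 * p))

def slope_penalty(beta, lambdas):
    """Sorted L-One penalty: sum_i lambda_i * |beta|_(i),
    where |beta|_(1) >= |beta|_(2) >= ... are the magnitudes
    of beta sorted in decreasing order."""
    sorted_abs = np.sort(np.abs(beta))[::-1]
    return float(np.dot(lambdas, sorted_abs))
```

Because the largest λ multiplies the largest coefficient, SLOPE penalizes large effects more aggressively than LASSO (which uses a single λ for all coordinates), and this adaptivity is what underlies its FDR control in the orthogonal case.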
