Signed Support Recovery for Single Index Models in High-Dimensions

7 November 2015
Matey Neykov
Q. Lin
Jun S. Liu
arXiv:1511.02270
Abstract

In this paper we study the support recovery problem for single index models $Y=f(\boldsymbol{X}^{\intercal}\boldsymbol{\beta},\varepsilon)$, where $f$ is an unknown link function, $\boldsymbol{X}\sim N_p(0,\mathbb{I}_p)$ and $\boldsymbol{\beta}$ is an $s$-sparse unit vector such that $\boldsymbol{\beta}_i\in\{\pm\frac{1}{\sqrt{s}},0\}$. In particular, we look into the performance of two computationally inexpensive algorithms: (a) the diagonal thresholding sliced inverse regression (DT-SIR) introduced by Lin et al. (2015); and (b) a semi-definite programming (SDP) approach inspired by Amini & Wainwright (2008). When $s=O(p^{1-\delta})$ for some $\delta>0$, we demonstrate that both procedures can succeed in recovering the support of $\boldsymbol{\beta}$ as long as the rescaled sample size $\kappa=\frac{n}{s\log(p-s)}$ is larger than a certain critical threshold. On the other hand, when $\kappa$ is smaller than a critical value, any algorithm fails to recover the support with probability at least $\frac{1}{2}$ asymptotically. In other words, we demonstrate that both DT-SIR and the SDP approach are optimal (up to a scalar) for recovering the support of $\boldsymbol{\beta}$ in terms of sample size. We provide extensive simulations, as well as a real-data application, to help verify our theoretical observations.
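As a concrete illustration of the setup, the sketch below simulates data from the single index model described in the abstract and applies a DT-SIR-style estimator: slice by $Y$, average $\boldsymbol{X}$ within slices, and threshold the diagonal of the estimated $\mathrm{Cov}(E[\boldsymbol{X}\mid Y])$. The slice count, the link function $f$, and the ad hoc thresholding rule are assumptions made for this example, not the paper's exact specification.

```python
import numpy as np

# Minimal sketch of diagonal-thresholding sliced inverse regression (DT-SIR)
# on synthetic single index model data. Illustrative choices throughout;
# the paper derives the precise thresholding rule and sample-size conditions.

rng = np.random.default_rng(0)
n, p, s = 2000, 500, 10

# s-sparse unit vector beta with nonzero entries in {+-1/sqrt(s)}
beta = np.zeros(p)
support = rng.choice(p, size=s, replace=False)
beta[support] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)

X = rng.standard_normal((n, p))        # X ~ N_p(0, I_p)
eps = 0.1 * rng.standard_normal(n)
Y = np.sinh(X @ beta) + eps            # an example monotone link f

# Sliced inverse regression: sort by Y, average X within each slice
H = 20
order = np.argsort(Y)
slices = np.array_split(order, H)
slice_means = np.stack([X[idx].mean(axis=0) for idx in slices])
weights = np.array([len(idx) / n for idx in slices])

# Diagonal of the estimated Cov(E[X | Y]); large entries flag active coordinates
diag_hat = (weights[:, None] * slice_means**2).sum(axis=0)

# Threshold the diagonal (cutoff chosen by inspection for this example)
tau = 2.0 * np.median(diag_hat)
support_hat = np.flatnonzero(diag_hat > tau)

print("true support:     ", np.sort(support))
print("estimated support:", support_hat)
```

With these sample sizes the rescaled quantity $\kappa=\frac{n}{s\log(p-s)}\approx 32$, comfortably in the regime where the abstract indicates recovery should succeed; shrinking $n$ pushes $\kappa$ below the critical level and the diagonal entries of the active coordinates sink into the noise.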
