
On the optimality of sliced inverse regression in high dimensions

Abstract

The central subspace of a pair of random variables $(y, x) \in \mathbb{R}^{p+1}$ is the minimal subspace $\mathcal{S}$ such that $y \perp \hspace{-2mm}\perp x \mid P_{\mathcal{S}}x$. In this paper, we consider the minimax rate of estimating the central space of the multiple index models $y = f(\beta_{1}^{\tau}x, \beta_{2}^{\tau}x, \ldots, \beta_{d}^{\tau}x, \epsilon)$ with at most $s$ active predictors, where $x \sim N(0, I_{p})$. We first introduce a large class of models depending on the smallest non-zero eigenvalue $\lambda$ of $\mathrm{var}(\mathbb{E}[x|y])$, over which we show that an aggregated estimator based on the SIR procedure converges at rate $d \wedge ((sd + s\log(ep/s))/(n\lambda))$. We then show that this rate is optimal in two scenarios: the single index models, and the multiple index models with fixed central dimension $d$ and fixed $\lambda$. Assuming a technical conjecture, we show that this rate is also optimal for multiple index models with bounded dimension of the central space. We believe these (conditional) optimality results offer meaningful insights into general SDR problems in high dimensions.
