

arXiv:1905.10888
Nonregular and Minimax Estimation of Individualized Thresholds in High Dimension with Binary Responses

26 May 2019
Huijie Feng
Y. Ning
Jiwei Zhao
Abstract

Given a large number of covariates $Z$, we consider the estimation of a high-dimensional parameter $\theta$ in an individualized linear threshold $\theta^T Z$ for a continuous variable $X$, which minimizes the disagreement between $\mathrm{sign}(X - \theta^T Z)$ and a binary response $Y$. While the problem can be formulated in the M-estimation framework, minimizing the corresponding empirical risk function is computationally intractable due to the discontinuity of the sign function. Moreover, estimating $\theta$ even in the fixed-dimensional setting is known to be a nonregular problem leading to nonstandard asymptotic theory. To tackle the computational and theoretical challenges in the estimation of the high-dimensional parameter $\theta$, we propose an empirical risk minimization approach based on a regularized smoothed loss function. The statistical and computational trade-off of the algorithm is investigated. Statistically, we show that the finite-sample error bound for estimating $\theta$ in the $\ell_2$ norm is $(s \log d / n)^{\beta/(2\beta+1)}$, where $d$ is the dimension of $\theta$, $s$ is the sparsity level, $n$ is the sample size, and $\beta$ is the smoothness of the conditional density of $X$ given the response $Y$ and the covariates $Z$. The convergence rate is nonstandard and slower than that in classical Lasso problems. Furthermore, we prove that the resulting estimator is minimax rate optimal up to a logarithmic factor. Lepski's method is developed to achieve adaptation to the unknown sparsity $s$ and smoothness $\beta$. Computationally, an efficient path-following algorithm is proposed to compute the solution path. We show that this algorithm achieves a geometric rate of convergence for computing the whole path. Finally, we evaluate the finite-sample performance of the proposed estimator in simulation studies and a real data analysis.
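The regularized smoothed empirical risk described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's method: the Gaussian-CDF smoother, the bandwidth `h`, the $\ell_1$ penalty weight `lam`, and the plain proximal-gradient solver are all illustrative assumptions standing in for the paper's kernel-smoothed loss and path-following algorithm.

```python
import numpy as np
from scipy.stats import norm


def soft_threshold(v, t):
    """Proximal operator of the l1 penalty: shrink each coordinate toward zero."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)


def smoothed_risk(theta, X, Z, Y, h):
    """Smoothed 0-1 disagreement between sign(X - Z @ theta) and Y in {-1, +1}.

    The discontinuous sign(u) is replaced by 2*Phi(u/h) - 1, where Phi is the
    standard normal CDF and h is a bandwidth (an assumed choice of smoother).
    """
    u = (X - Z @ theta) / h
    return np.mean(0.5 - Y * (norm.cdf(u) - 0.5))


def fit_threshold(X, Z, Y, h=1.0, lam=0.01, step=0.2, iters=500):
    """Minimize the smoothed risk plus lam * ||theta||_1 by proximal gradient.

    A sketch only: the smoothed loss is nonconvex, so this finds a stationary
    point rather than a guaranteed global minimizer.
    """
    n, d = Z.shape
    theta = np.zeros(d)
    for _ in range(iters):
        u = (X - Z @ theta) / h
        # Gradient of the smoothed risk: d/dtheta of -Y * Phi((X - Z@theta)/h)
        grad = (Y * norm.pdf(u)) @ Z / (n * h)
        theta = soft_threshold(theta - step * grad, step * lam)
    return theta
```

A smaller bandwidth `h` makes the surrogate closer to the true sign loss but harder to optimize; this bias-variance-style tension is one face of the statistical/computational trade-off the paper analyzes.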
