ResearchTrend.AI

arXiv:1812.10013 (v3, latest)

Optimal False Discovery Control of Minimax Estimator

25 December 2018
Qifan Song
Guang Cheng
Abstract

In the analysis of high-dimensional regression models, there are two important objectives: statistical estimation and variable selection. Most works in the literature focus on either optimal estimation, e.g., minimax $L_2$ error, or optimal selection behavior, e.g., minimax Hamming loss. In this study, however, we investigate the subtle interplay between estimation accuracy and selection behavior. Our result shows that an estimator's $L_2$ error rate critically depends on its type I error control. Essentially, the minimax convergence rate of the false discovery rate over all rate-minimax estimators is a polynomial of the true sparsity ratio. This result lets us characterize the false positive control of rate-optimal estimators under different sparsity regimes. More specifically, under near-linear sparsity, the number of false positives always explodes to infinity in the worst-case scenario, but the false discovery rate still converges to 0; under linear sparsity, even the false discovery rate does not asymptotically converge to 0. On the other hand, in order to asymptotically eliminate all false discoveries, an estimator must be sub-optimal in terms of its convergence rate. This work offers a rigorous analysis of the incompatibility between selection consistency and rate-minimaxity observed in the high-dimensional regression literature.
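The estimation/selection tension described in the abstract can be illustrated with a small simulation. The sketch below is not the paper's construction; it uses the classical sparse normal means model and hard thresholding at the universal level $\sqrt{2\log n}$ (a standard rate-minimax estimator) as a stand-in, and empirically measures the false discovery proportion of its selected support. The dimension, sparsity, and signal strength are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10000  # ambient dimension
s = 50     # number of truly nonzero means (sparse regime)
t = np.sqrt(2 * np.log(n))  # universal threshold
signal = 3.0 * t            # strong signal, well above the threshold

support = np.arange(s)
theta = np.zeros(n)
theta[support] = signal

def fdp(theta_hat, support):
    """False discovery proportion of the estimated support."""
    selected = np.flatnonzero(theta_hat != 0)
    if selected.size == 0:
        return 0.0
    false = np.setdiff1d(selected, support).size
    return false / selected.size

reps = 200
fdps = []
for _ in range(reps):
    y = theta + rng.standard_normal(n)
    # Hard thresholding: keep coordinates whose observation
    # exceeds the universal threshold in absolute value.
    theta_hat = np.where(np.abs(y) > t, y, 0.0)
    fdps.append(fdp(theta_hat, support))

print(f"average FDP over {reps} replications: {np.mean(fdps):.4f}")
```

Under these settings the average false discovery proportion is small but typically nonzero: even a rate-minimax thresholding rule admits occasional false positives, which is the phenomenon the paper quantifies precisely across sparsity regimes.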
