ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

arXiv:2011.01983

Testing (Infinitely) Many Zero Restrictions

3 November 2020
Jonathan B. Hill
Abstract

This paper proposes a max-test for testing (possibly infinitely) many zero parameter restrictions in an extremum estimation framework. The test statistic is formed by estimating key parameters one at a time, based on many empirical loss functions that map from a low-dimensional parameter space, and choosing the largest of these individually estimated parameters in absolute value. The parsimoniously parametrized loss functions identify whether the original parameter of interest is or is not zero. Estimating fixed low-dimensional sub-parameters ensures greater estimator accuracy and does not require a sparsity assumption, while using only the largest of a sequence of weighted estimators reduces test statistic complexity and therefore estimation error, ensuring sharper size and greater power in practice. Weights allow for standardization in order to control for estimator dispersion. In a nonlinear parametric regression framework we provide a parametric wild bootstrap for p-value computation that does not directly require the max-statistic's limit distribution. A simulation experiment shows the max-test dominates a conventional bootstrapped test.
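To make the idea concrete, here is a minimal, hypothetical sketch (not the paper's actual procedure) of the two ingredients the abstract describes: estimating each candidate parameter one at a time via a low-dimensional fit, taking the largest standardized estimate in absolute value as the test statistic, and computing a p-value by a wild bootstrap. The regression is deliberately simplified to a linear-in-parameters toy model; all variable names and the data-generating process are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: data generated under the null, with J candidate regressors
# whose coefficients delta_j are all hypothesized to be zero.
n, J = 200, 50
Z = rng.normal(size=(n, J))      # candidate regressors
y = 1.0 + rng.normal(size=n)     # outcome generated under H0: all delta_j = 0

def max_stat(y, Z):
    """Largest standardized one-at-a-time estimate in absolute value.

    Each delta_j is estimated from its own one-dimensional least-squares
    fit (a low-dimensional sub-problem), then weighted by the inverse of
    its standard error to control for estimator dispersion.
    """
    n = len(y)
    yc = y - y.mean()
    stats = np.empty(Z.shape[1])
    for j in range(Z.shape[1]):
        z = Z[:, j] - Z[:, j].mean()
        d = (z @ yc) / (z @ z)                       # one-dimensional LS estimate
        resid = yc - d * z
        se = np.sqrt((resid @ resid) / (n - 2) / (z @ z))
        stats[j] = abs(d) / se                       # standardized estimate
    return stats.max()

t_obs = max_stat(y, Z)

# Wild bootstrap: flip residual signs with Rademacher weights, rebuild the
# outcome under the null fit, and recompute the max-statistic each time.
resid = y - y.mean()
B = 499
t_boot = np.empty(B)
for b in range(B):
    w = rng.choice([-1.0, 1.0], size=n)
    y_b = y.mean() + w * resid
    t_boot[b] = max_stat(y_b, Z)

p_value = (1 + np.sum(t_boot >= t_obs)) / (B + 1)
print(round(p_value, 3))
```

Because the data are simulated under the null, the bootstrap p-value should be non-extreme in a typical draw; the same skeleton extends to a nonlinear regression by replacing the one-dimensional least-squares fits with one-dimensional extremum estimation steps.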
