
Minimal penalty for Goldenshluger-Lepski method

Abstract

This paper is concerned with adaptive nonparametric estimation using the Goldenshluger-Lepski methodology. This method is designed to select an estimator among a collection $(\hat f_h)_{h\in H}$ by minimizing $B(h)+V(h)$, where $B(h)=\sup\{[\|\hat f_{h'}-\hat f_h\|-V(h')]_+,\ h'\in H\}$ and $V(h)$ is a variance term. In the case of density estimation with kernel estimators, it is shown that the procedure fails if the variance term is chosen too small: this gives for the first time a minimal penalty for the Goldenshluger-Lepski methodology. Some simulations illustrate the theoretical results.
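The selection rule described above can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact procedure: it assumes a Gaussian kernel density estimator, an L2 distance computed on a regular grid, and a variance term of the form $V(h) = a\sqrt{\log n /(n h)}$ with a hypothetical tuning constant `a` (the role of such a constant is precisely what a minimal-penalty result constrains).

```python
import numpy as np

def kde(data, h, grid):
    """Gaussian kernel density estimate with bandwidth h, evaluated on grid."""
    u = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def goldenshluger_lepski(data, bandwidths, grid, a=1.0):
    """Select a bandwidth by minimizing B(h) + V(h), following the rule
    B(h) = sup_{h'} [ ||f_h' - f_h|| - V(h') ]_+  over the bandwidth family.
    The form of V and the constant a are illustrative assumptions."""
    n = len(data)
    dx = grid[1] - grid[0]
    # Illustrative variance-type term V(h); a is the tuning constant.
    V = {h: a * np.sqrt(np.log(n) / (n * h)) for h in bandwidths}
    est = {h: kde(data, h, grid) for h in bandwidths}

    def dist(f, g):
        # Discretized L2 distance between two estimates on the grid.
        return np.sqrt(np.sum((f - g) ** 2) * dx)

    best_h, best_crit = None, np.inf
    for h in bandwidths:
        # Bias-proxy term B(h): worst positive excess over V(h').
        B = max(max(dist(est[hp], est[h]) - V[hp], 0.0) for hp in bandwidths)
        crit = B + V[h]
        if crit < best_crit:
            best_h, best_crit = h, crit
    return best_h

rng = np.random.default_rng(0)
data = rng.normal(size=500)
grid = np.linspace(-4, 4, 200)
bandwidths = [0.05, 0.1, 0.2, 0.4, 0.8]
h_star = goldenshluger_lepski(data, bandwidths, grid)
print(h_star)
```

Taking `a` too small makes every $V(h')$ negligible, so $B(h)$ dominates and the rule systematically favors the smallest bandwidths; the paper's result quantifies the threshold below which this failure occurs.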
