Minimal penalty for Goldenshluger-Lepski method

Abstract
This paper is concerned with adaptive nonparametric estimation using the Goldenshluger-Lepski methodology. This method is designed to select an estimator among a collection by minimizing the sum of an estimated bias term and a variance term. In the case of density estimation with kernel estimators, it is shown that the procedure fails if the variance term is chosen too small: this exhibits, for the first time, a minimal penalty for the Goldenshluger-Lepski methodology. Some simulations illustrate the theoretical results.
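To make the selection rule concrete, here is a minimal sketch of Goldenshluger-Lepski bandwidth selection for kernel density estimation, under illustrative assumptions not taken from the paper: a Gaussian kernel (so the double-convolution estimator is again a Gaussian KDE with bandwidth sqrt(h^2 + h'^2)), squared L2 loss approximated on a grid, and a tuning constant `kappa` in the variance term. All function names are hypothetical.

```python
import numpy as np

def kde(x_grid, data, h):
    """Gaussian kernel density estimate evaluated on a grid."""
    u = (x_grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def gl_select(data, bandwidths, x_grid, kappa=1.0):
    """Select a bandwidth by the Goldenshluger-Lepski rule (sketch).

    Minimizes A(h) + V(h), where V(h) = kappa * ||K||^2 / (n h) is the
    variance term and A(h) is a max over pairwise comparisons acting as
    an estimated bias term.
    """
    n = len(data)
    dx = x_grid[1] - x_grid[0]
    norm_K2 = 1.0 / (2 * np.sqrt(np.pi))  # ||K||_2^2 for the Gaussian kernel
    bandwidths = np.asarray(bandwidths, dtype=float)
    V = kappa * norm_K2 / (n * bandwidths)  # variance (penalty) term
    est = [kde(x_grid, data, h) for h in bandwidths]
    A = np.empty(len(bandwidths))
    for i, h in enumerate(bandwidths):
        vals = []
        for j, hp in enumerate(bandwidths):
            # Gaussian kernels: K_h * K_{h'} has bandwidth sqrt(h^2 + h'^2)
            f_hhp = kde(x_grid, data, np.sqrt(h**2 + hp**2))
            d2 = ((f_hhp - est[j])**2).sum() * dx  # squared L2 distance on the grid
            vals.append(max(d2 - V[j], 0.0))
        A[i] = max(vals)
    return float(bandwidths[np.argmin(A + V)])
```

The paper's negative result concerns what happens when `kappa` (hence V) is taken below a minimal level: the criterion then systematically favors too-small bandwidths and the selection fails.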