
Optimal cross-validation in density estimation

5 November 2008
Alain Celisse
arXiv: 0811.0802 (abs · PDF · HTML)
Abstract

The performance of cross-validation (CV) is analyzed in two contexts: (i) risk estimation and (ii) model selection in the density estimation framework. The main focus is on one CV algorithm called leave-p-out (Lpo), where p denotes the cardinality of the test set. Closed-form expressions are derived for the Lpo estimator of the risk of projection estimators, which makes V-fold cross-validation completely useless. From a theoretical point of view, these closed-form expressions make it possible to study the performance of Lpo as a risk estimator. For instance, the optimality of leave-one-out (Loo), that is, Lpo with p = 1, is proved among CV procedures. Two model selection frameworks are also considered: estimation, as opposed to identification. Unlike for risk estimation, Loo is proved to be suboptimal as a model selection procedure. In the estimation framework with finite sample size n, optimality is achieved for p large enough (with p/n = o(1)) to balance overfitting. A link is also identified between the optimal p and the structure of the model collection. These theoretical results are strongly supported by simulation experiments. When performing identification, model consistency is also proved for Lpo with p/n → 1 as n → +∞.
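To make the closed-form idea concrete, the Python sketch below computes the Lpo risk estimate of a regular histogram (a simple projection estimator) in two ways: by brute-force averaging of the hold-out criterion ||s_train||^2 − (2/p) Σ_test s_train(X_i) over all test sets of size p, and by a closed-form expression obtained by averaging that same criterion analytically over a uniformly drawn test set (training-set bin counts are then hypergeometric). The formula implemented here is a reconstruction under those assumptions, intended only as an illustration; it is not necessarily the exact expression derived in the paper, and the function names and toy data are illustrative.

from itertools import combinations
import numpy as np

def histogram_density(counts, n_train, widths):
    # Projection (histogram) estimator: counts / (n_train * bin width) on each bin.
    return counts / (n_train * widths)

def lpo_risk_bruteforce(x, edges, p):
    # Average the hold-out criterion ||s_train||^2 - (2/p) * sum_test s_train(X_i)
    # over every test set of size p. Feasible only for tiny n; used here as a check.
    n = len(x)
    widths = np.diff(edges)
    bins = np.clip(np.digitize(x, edges) - 1, 0, len(widths) - 1)
    total, n_sets = 0.0, 0
    for test in combinations(range(n), p):
        mask = np.zeros(n, dtype=bool)
        mask[list(test)] = True
        train_counts = np.bincount(bins[~mask], minlength=len(widths))
        dens = histogram_density(train_counts, n - p, widths)
        norm2 = np.sum(dens**2 * widths)               # integral of the squared estimator
        fit = (2.0 / p) * np.sum(dens[bins[mask]])     # estimator evaluated at test points
        total += norm2 - fit
        n_sets += 1
    return total / n_sets

def lpo_risk_closed_form(x, edges, p):
    # Closed-form version of the same average, computed from the bin counts alone
    # (a reconstruction under the assumptions stated above, not necessarily the
    # paper's exact expression). Costs a single pass over the bins.
    n = len(x)
    widths = np.diff(edges)
    bins = np.clip(np.digitize(x, edges) - 1, 0, len(widths) - 1)
    counts = np.bincount(bins, minlength=len(widths))
    c = n * (n - 1) * (n - p)
    return ((2 * n - p) * np.sum(counts / widths)
            - (n - p + 1) * np.sum(counts**2 / widths)) / c

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=12)                  # tiny sample so brute force stays cheap
    edges = np.linspace(-3.0, 3.0, 7)        # regular histogram with 6 bins
    for p in (1, 2, 5):
        bf = lpo_risk_bruteforce(x, edges, p)
        cf = lpo_risk_closed_form(x, edges, p)
        print(f"p={p}: brute force={bf:.6f}, closed form={cf:.6f}")

On this toy sample the two computations agree up to floating-point precision, which illustrates the point of such closed-form expressions: the exact Lpo average can be evaluated from the bin counts in a single pass, with no need to enumerate or sample the test sets.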

View on arXiv