
Finite sample improvement of Akaike's Information Criterion

Abstract

We emphasize that it is possible to improve the principle of unbiased risk estimation for model selection by addressing excess risk deviations in the design of penalization procedures. Indeed, we propose a modification of Akaike's Information Criterion that avoids overfitting, even when the sample size is small. We call this correction an over-penalization procedure. As a proof of concept, we show the nonasymptotic optimality of our histogram selection procedure in density estimation by establishing sharp oracle inequalities for the Kullback-Leibler divergence. One of the main features of our theoretical results is that they cover the estimation of unbounded log-densities. To do so, we prove several analytical and probabilistic lemmas that are of independent interest. In an experimental study, we also demonstrate state-of-the-art performance of our over-penalization criterion for bin size selection, in particular outperforming the AICc procedure.
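To make the idea concrete, here is a minimal sketch (not the paper's exact procedure) of AIC-style bin selection for a regular histogram, with a simple over-penalization term added. The correction factor `epsilon` below is an illustrative placeholder of a plausible vanishing-rate form, not the calibrated over-penalization derived in the paper.

```python
# Sketch: histogram bin selection by penalized maximum likelihood.
# Plain AIC penalizes by D - 1 (free parameters of a D-bin histogram);
# the over-penalized variant inflates that penalty for small samples.
import numpy as np

def log_likelihood(x, n_bins):
    """Log-likelihood of a regular histogram density estimate of x."""
    counts, edges = np.histogram(x, bins=n_bins)
    widths = np.diff(edges)
    n = len(x)
    mask = counts > 0  # empty bins hold no data points, so they contribute 0
    return np.sum(counts[mask] * np.log(counts[mask] / (n * widths[mask])))

def select_bins(x, max_bins=50, over_penalize=True):
    """Return the bin count maximizing the penalized log-likelihood."""
    n = len(x)
    # Assumed illustrative correction; the paper's term is more involved.
    epsilon = np.sqrt(np.log(n) / n) if over_penalize else 0.0
    best_D, best_score = 1, -np.inf
    for D in range(1, max_bins + 1):
        score = log_likelihood(x, D) - (1.0 + epsilon) * (D - 1)
        if score > best_score:
            best_D, best_score = D, score
    return best_D

rng = np.random.default_rng(0)
x = rng.normal(size=100)  # small sample, where plain AIC tends to overfit
print(select_bins(x, over_penalize=False), select_bins(x, over_penalize=True))
```

On small samples the over-penalized criterion typically selects fewer bins than plain AIC, which is the intended guard against overfitting.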
