Sharper lower bounds on the performance of the empirical risk
minimization algorithm
Abstract
We present an argument based on the multidimensional and the uniform central limit theorems, proving that, under some geometric assumptions relating the target function and the learning class, the excess risk of the empirical risk minimization algorithm is lower bounded by \[\frac{\mathbb{E}\sup_{q\in Q}G_q}{\sqrt{n}}\,\delta,\] where $(G_q)_{q\in Q}$ is a canonical Gaussian process associated with $Q$ (a well chosen subset of the learning class) and $\delta$ is a parameter governing the oscillations of the empirical excess risk function over a small ball in the learning class.
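The quantity $\mathbb{E}\sup_{q\in Q}G_q$ appearing in the lower bound is the Gaussian complexity of the class $Q$. A minimal sketch of how it can be estimated by Monte Carlo for a finite class is given below; the class `Q`, the sample size `n`, and the use of the empirical inner product as the covariance of the canonical process are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite class Q of m functions, each represented by its values
# at n sample points. The canonical Gaussian process (G_q) indexed by Q is
# assumed here to have covariance given by the empirical inner product, so it
# can be realized as G_q = <g, q> / sqrt(n) for a standard Gaussian vector g.
n, m = 200, 50
Q = rng.standard_normal((m, n))                 # rows: functions q evaluated at n points
Q /= np.linalg.norm(Q, axis=1, keepdims=True)   # normalize rows for illustration

def gaussian_complexity(Q, trials=2000, rng=rng):
    """Monte Carlo estimate of E sup_{q in Q} G_q for a finite class Q."""
    n = Q.shape[1]
    g = rng.standard_normal((trials, n))        # one Gaussian vector per trial
    sups = (g @ Q.T / np.sqrt(n)).max(axis=1)   # sup over the finite class per draw
    return sups.mean()

print(gaussian_complexity(Q))
```

For a finite class of $m$ normalized functions, the estimate is on the order of $\sqrt{2\log m}/\sqrt{n}$, consistent with the $1/\sqrt{n}$ scaling in the bound.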
