Learning without Concentration
Annual Conference on Computational Learning Theory (COLT), 2014
Abstract
We obtain sharp bounds on the performance of Empirical Risk Minimization performed in a convex class and with respect to the squared loss, without any boundedness assumptions on class members or on the target. Rather than resorting to a concentration-based argument, the method relies on a "small-ball" assumption and thus holds for heavy-tailed sampling and heavy-tailed targets. Moreover, the resulting estimates scale correctly with the "noise". When applied to the classical, bounded scenario, the method always improves the known estimates.
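To make the setup concrete, the LaTeX sketch below records the Empirical Risk Minimization estimator for the squared loss and one common formulation of the small-ball condition; the constants $\kappa$ and $\epsilon$ and the exact form of the condition are assumptions based on standard statements in this line of work, not quoted verbatim from the paper.

% A minimal sketch, assuming the standard learning setup:
% F is a convex class of functions, (X_1, Y_1), ..., (X_N, Y_N)
% are i.i.d. copies of the pair (X, Y), and Y is the target.
\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Empirical Risk Minimization with respect to the squared loss:
\[
  \hat{f} \in \operatorname*{arg\,min}_{f \in F}
    \frac{1}{N} \sum_{i=1}^{N} \bigl( f(X_i) - Y_i \bigr)^{2} .
\]

% One common formulation of the small-ball assumption: there exist
% constants $\kappa > 0$ and $0 < \epsilon \le 1$ such that for every
% pair $f, h \in F$,
\[
  \Pr\bigl( \lvert f(X) - h(X) \rvert \ge \kappa \, \lVert f - h \rVert_{L_2} \bigr)
    \ge \epsilon .
\]

\end{document}

The point of the small-ball condition is that it lower-bounds how often class differences are "visible" in the sample, which replaces the uniform concentration typically obtained from boundedness or sub-Gaussian assumptions.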
