
PAC-Bayes under potentially heavy tails

Abstract

We derive PAC-Bayesian learning guarantees for heavy-tailed losses, and demonstrate that the resulting optimal Gibbs posterior enjoys much stronger guarantees than are available for existing randomized learning algorithms. Our core technique uses PAC-Bayesian inequalities to derive a robust risk estimator that is, by design, easy to compute. Assuming only that the loss distribution has finite variance, the learning algorithm derived from this estimator enjoys nearly sub-Gaussian statistical error.
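The abstract does not spell out the estimator. As an illustrative sketch only, and not the paper's construction, the sketch below shows one standard way to build a robust risk estimator under a finite-variance assumption: a Catoni-style soft-truncated mean of the observed losses. The function name `catoni_style_risk_estimate` and the parameters `var_bound` and `delta` are hypothetical, introduced here for illustration.

```python
import numpy as np

def catoni_style_risk_estimate(losses, var_bound, delta=0.05):
    """Robust mean estimate of the risk from heavy-tailed losses.

    Illustrative sketch: soft-truncates each loss with Catoni's influence
    function psi(x) = sign(x) * log(1 + |x| + x^2 / 2), so a single extreme
    loss cannot dominate the average. Assumes only a finite-variance bound
    `var_bound` on the loss distribution (a hypothetical input, not from
    the paper).
    """
    losses = np.asarray(losses, dtype=float)
    n = losses.size
    # Scale chosen so the deviation is on the order of
    # sqrt(2 * var_bound * log(1/delta) / n): sub-Gaussian-like error
    # despite heavy tails.
    s = np.sqrt(2.0 * np.log(1.0 / delta) / (n * var_bound))
    x = s * losses
    psi = np.sign(x) * np.log1p(np.abs(x) + 0.5 * x**2)
    return psi.sum() / (n * s)

# Example: heavy-tailed losses with finite variance (Pareto with shape > 2),
# compared against the plain empirical mean.
rng = np.random.default_rng(0)
losses = rng.pareto(2.5, size=2000) + 1.0
print("plain mean: ", losses.mean())
print("robust mean:", catoni_style_risk_estimate(losses, var_bound=2.5))
```

The key design point this sketch illustrates is that the truncation is soft (logarithmic in the tails) rather than a hard clip, which is what keeps the estimator easy to compute while controlling the influence of outliers.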
