  3. 1605.00252
Fast Rates for General Unbounded Loss Functions: from ERM to Generalized Bayes

1 May 2016
Peter Grünwald
Nishant A. Mehta
Abstract

We present new excess risk bounds for general unbounded loss functions including log loss and squared loss, where the distribution of the losses may be heavy-tailed. The bounds hold for general estimators, but they are optimized when applied to $\eta$-generalized Bayesian, MDL, and empirical risk minimization estimators. In the case of log loss, the bounds imply convergence rates for generalized Bayesian inference under misspecification in terms of a generalization of the Hellinger metric as long as the learning rate $\eta$ is set correctly. For general loss functions, our bounds rely on two separate conditions: the $v$-GRIP (generalized reversed information projection) conditions, which control the lower tail of the excess loss; and the newly introduced witness condition, which controls the upper tail. The parameter $v$ in the $v$-GRIP conditions determines the achievable rate and is akin to the exponent in the Tsybakov margin condition and the Bernstein condition for bounded losses, which the $v$-GRIP conditions generalize; favorable $v$ in combination with small model complexity leads to $\tilde{O}(1/n)$ rates. The witness condition allows us to connect the excess risk to an "annealed" version thereof, by which we generalize several previous results connecting Hellinger and Rényi divergence to KL divergence.
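For context, the $\eta$-generalized Bayesian posterior referred to above is usually written as a Gibbs-style reweighting of the prior by the exponentiated empirical loss; the sketch below uses generic notation (predictors $f$, prior $\pi$, loss $\ell_f$, observations $z_1,\dots,z_n$), which need not match the paper's own:

\[
\pi_\eta(f \mid z_1,\dots,z_n) \;\propto\; \pi(f)\,\exp\!\Big(-\eta \sum_{i=1}^{n} \ell_f(z_i)\Big),
\]

which recovers standard Bayesian inference for log loss $\ell_f(z) = -\log f(z)$ with $\eta = 1$. The Bernstein condition that the $v$-GRIP conditions generalize (again stated in generic notation, with risk minimizer $f^*$ and constants $B > 0$, $\beta \in (0,1]$) relates the second moment of the excess loss to its mean:

\[
\mathbb{E}\big[(\ell_f - \ell_{f^*})^2\big] \;\le\; B\,\big(\mathbb{E}[\ell_f - \ell_{f^*}]\big)^{\beta} \quad \text{for all } f \text{ in the model.}
\]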
