Simpler PAC-Bayesian Bounds for Hostile Data

23 October 2016
Pierre Alquier
Benjamin Guedj
Abstract

PAC-Bayesian learning bounds are of the utmost interest to the learning community. Their role is to connect the generalization ability of an aggregation distribution ρ to its empirical risk and to its Kullback-Leibler divergence with respect to some prior distribution π. Unfortunately, most of the available bounds typically rely on heavy assumptions such as boundedness and independence of the observations. This paper aims at relaxing these constraints and provides PAC-Bayesian learning bounds that hold for dependent, heavy-tailed observations (hereafter referred to as hostile data). In these bounds the Kullback-Leibler divergence is replaced with a general version of Csiszár's f-divergence. We prove a general PAC-Bayesian bound, and show how to use it in various hostile settings.
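For readers unfamiliar with the shape of such results, here is a minimal sketch, not the bound proved in this paper: a classical McAllester-style PAC-Bayesian inequality, which requires a loss bounded in [0, 1] and i.i.d. observations (the very assumptions the paper relaxes), followed by the definition of Csiszár's f-divergence that generalises the Kullback-Leibler term. The notations R (out-of-sample risk), r_n (empirical risk on n observations) and δ (confidence level) are standard conventions assumed here, and the constants are indicative only.

With probability at least 1 − δ over an i.i.d. sample of size n, simultaneously for every aggregation distribution ρ,
\[
  \mathbb{E}_{\theta \sim \rho}\, R(\theta)
  \;\le\;
  \mathbb{E}_{\theta \sim \rho}\, r_n(\theta)
  \;+\;
  \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \log\frac{2\sqrt{n}}{\delta}}{2n}} .
\]
Csiszár's f-divergence, which replaces the KL term in the hostile-data bounds, is defined for a convex function f with f(1) = 0 by
\[
  D_f(\rho, \pi) \;=\; \int f\!\left(\frac{\mathrm{d}\rho}{\mathrm{d}\pi}\right) \mathrm{d}\pi ,
\]
and recovers the Kullback-Leibler divergence for the choice f(x) = x log x.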
