Rate-Distortion Bounds on Bayes Risk in Supervised Learning

8 May 2016
M. Nokleby
Ahmad Beirami
Robert Calderbank
arXiv: 1605.02268 (abs · PDF · HTML)
Abstract

An information-theoretic framework is presented for estimating the number of labeled samples needed to train a classifier in a parametric Bayesian setting. Ideas from rate-distortion theory are used to derive bounds on the average $L_1$ or $L_\infty$ distance between the learned classifier and the true maximum a posteriori (MAP) classifier, which are well-established surrogates for the excess classification error due to imperfect learning. The bounds are stated in terms of the differential entropy of the posterior distribution, the Fisher information of the parametric family, and the number of training samples available. The MAP classifier is viewed as a random source, the labeled training data are viewed as a finite-rate encoding of that source, and the $L_1$ or $L_\infty$ Bayes risk is viewed as the average distortion. The result is a framework complementary to the well-known probably approximately correct (PAC) framework. PAC bounds characterize the worst-case learning performance over a family of classifiers whose complexity is captured by the Vapnik-Chervonenkis (VC) dimension. The rate-distortion framework, by contrast, characterizes average-case performance over a family of data distributions, whose complexity is captured by a quantity called the interpolation dimension. The resulting bounds do not suffer from the pessimism typical of the PAC framework, particularly when the training set is small. The framework also naturally accommodates multi-class settings. Furthermore, Monte Carlo methods provide accurate estimates of the bounds even for complicated distributions. The effectiveness of the framework is demonstrated in both binary and multi-class Gaussian settings.
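The abstract does not reproduce the paper's exact bounds, but the rate-distortion reasoning it describes can be illustrated with the classical Shannon lower bound for absolute-error distortion. The sketch below is a textbook bound for a scalar source, not the paper's result, and the step relating $n$ samples to a rate $R(n)$ is an assumption stated only for illustration.

```latex
% Minimal sketch: the classical Shannon lower bound for L1 (absolute-error)
% distortion, illustrating how a rate constraint on the training data
% implies a floor on the L1 Bayes risk. Not the paper's exact bound.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
For a continuous scalar source $W$ with differential entropy $h(W)$ and
distortion $d(w,\hat{w}) = |w - \hat{w}|$, the maximum-entropy argument
(the Laplace distribution maximizes differential entropy subject to a
mean-absolute-value constraint) gives, in nats,
\begin{align}
  R(D) &\ge h(W) - \max_{\mathbb{E}|N| \le D} h(N)
        = h(W) - \log(2eD), \\
  D(R) &\ge \tfrac{1}{2e}\, e^{\,h(W) - R}.
\end{align}
If $n$ labeled samples convey at most $R(n)$ nats about the parameter
indexing the MAP classifier, then any learning rule's average $L_1$
Bayes risk is at least $D(R(n))$.
\end{document}
```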

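As a hedged illustration of the Monte Carlo estimation and the binary Gaussian setting mentioned in the abstract, the following Python sketch simulates a toy model: a binary Gaussian classification problem with a mean parameter drawn from a prior, a plug-in classifier trained on n samples, and a Monte Carlo estimate of the average $L_1$ distance between the learned and true MAP posterior probabilities. The model, the estimator, and all names are assumptions, not the authors' code.

```python
"""Monte Carlo sketch of the average L1 distance between a learned
classifier and the true MAP classifier in a binary Gaussian setting.
Illustrative toy model, not the paper's experiment."""
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def l1_risk(n_train, n_trials=2000, n_test=200):
    """Estimate E|eta_hat(X) - eta(X)| over the prior, the training
    set, and the test point, for a learner given n_train samples."""
    risks = np.empty(n_trials)
    for t in range(n_trials):
        mu = rng.normal(1.0, 0.5)          # parameter drawn from the prior
        # Training data: Y ~ Bernoulli(1/2), X | Y ~ N((2Y - 1) mu, 1).
        s = rng.choice([-1.0, 1.0], size=n_train)
        x = s * mu + rng.standard_normal(n_train)
        mu_hat = np.mean(s * x)            # unbiased plug-in estimate of mu
        # Test points drawn from the same mixture.
        s_test = rng.choice([-1.0, 1.0], size=n_test)
        x_test = s_test * mu + rng.standard_normal(n_test)
        eta_true = sigmoid(2.0 * mu * x_test)     # true MAP posterior P(Y=1|x)
        eta_hat = sigmoid(2.0 * mu_hat * x_test)  # learned classifier
        risks[t] = np.mean(np.abs(eta_hat - eta_true))
    return risks.mean()

for n in (5, 20, 80, 320):
    print(f"n = {n:4d}   estimated L1 Bayes risk ~ {l1_risk(n):.4f}")
```

Plotting the estimate against n would show the decay that a rate-distortion lower bound of the kind described above constrains from below; in a Gaussian family the bound can be tightened using the Fisher information, as the abstract indicates.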