In Defense of Uniform Convergence: Generalization via derandomization with an application to interpolating predictors

9 December 2019
Jeffrey Negrea
Gintare Karolina Dziugaite
Daniel M. Roy
Abstract

We propose to study the generalization error of a learned predictor $\hat h$ in terms of that of a surrogate (potentially randomized) predictor that is coupled to $\hat h$ and designed to trade empirical risk for control of generalization error. In the case where $\hat h$ interpolates the data, it is interesting to consider theoretical surrogate classifiers that are partially derandomized or rerandomized, e.g., fit to the training data but with modified label noise. We also show that replacing $\hat h$ by its conditional distribution with respect to an arbitrary $\sigma$-field is a convenient way to derandomize. We study two examples, inspired by the work of Nagarajan and Kolter (2019) and Bartlett et al. (2019), where the learned classifier $\hat h$ interpolates the training data with high probability, has small risk, and, yet, does not belong to a nonrandom class with a tight uniform bound on two-sided generalization error. At the same time, we bound the risk of $\hat h$ in terms of surrogates constructed by conditioning and denoising, respectively, and shown to belong to nonrandom classes with uniformly small generalization error.
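
The surrogate argument sketched in the abstract can be read through an elementary risk decomposition. The LaTeX fragment below is an illustrative sketch only, not the paper's stated bound: the symbols $R$ (population risk), $\hat R$ (empirical risk), and $g$ (the coupled surrogate) are assumed notation introduced here for exposition.

% Sketch: for a learned predictor \hat{h} and a coupled surrogate g,
% the generalization gap of \hat{h} splits as an algebraic identity
\[
  R(\hat{h}) - \hat{R}(\hat{h})
  = \underbrace{\bigl(R(\hat{h}) - R(g)\bigr)}_{\text{coupling in risk}}
  + \underbrace{\bigl(R(g) - \hat{R}(g)\bigr)}_{\text{uniform bound applies to } g}
  + \underbrace{\bigl(\hat{R}(g) - \hat{R}(\hat{h})\bigr)}_{\text{empirical risk traded away}}.
\]
% If g belongs to a nonrandom class with a tight uniform generalization bound,
% controlling the two coupling terms controls the risk of \hat{h}, even when
% \hat{h} itself lies outside any such class.

Under this reading, derandomizing by conditioning (replacing $\hat h$ with its conditional distribution given a $\sigma$-field) or by denoising the labels are two ways of producing a surrogate $g$ for which the middle term admits a uniform bound.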
