Generalization Bounds for Uniformly Stable Algorithms

24 December 2018
Vitaly Feldman, Jan Vondrák
arXiv: 1812.09859
Abstract

Uniform stability of a learning algorithm is a classical notion of algorithmic stability introduced to derive high-probability bounds on the generalization error (Bousquet and Elisseeff, 2002). Specifically, for a loss function with range bounded in $[0,1]$, the generalization error of a $\gamma$-uniformly stable learning algorithm on $n$ samples is known to be at most $O((\gamma + 1/n)\sqrt{n\log(1/\delta)})$ with probability at least $1-\delta$. Unfortunately, this bound does not lead to meaningful generalization guarantees in many common settings where $\gamma \geq 1/\sqrt{n}$. At the same time, the bound is known to be tight only when $\gamma = O(1/n)$. Here we prove substantially stronger generalization bounds for uniformly stable algorithms without any additional assumptions. First, we show that the generalization error in this setting is at most $O(\sqrt{(\gamma + 1/n)\log(1/\delta)})$ with probability at least $1-\delta$. In addition, we prove a tight bound of $O(\gamma^2 + 1/n)$ on the second moment of the generalization error; the best previous bound on the second moment was $O(\gamma + 1/n)$. Our proofs are based on new analysis techniques, and our results imply substantially stronger generalization guarantees for several well-studied algorithms.
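To make the improvement concrete, the minimal sketch below (not from the paper; the constant factors hidden by the $O(\cdot)$ notation are assumed to be 1, and the function names are illustrative) evaluates the classical high-probability bound $(\gamma + 1/n)\sqrt{n\log(1/\delta)}$ against the new bound $\sqrt{(\gamma + 1/n)\log(1/\delta)}$, including the regime $\gamma = 1/\sqrt{n}$ where the abstract notes the classical bound stops being meaningful.

```python
import math

def classical_bound(gamma: float, n: int, delta: float) -> float:
    # Bousquet-Elisseeff style high-probability bound, constants dropped:
    # (gamma + 1/n) * sqrt(n * log(1/delta))
    return (gamma + 1 / n) * math.sqrt(n * math.log(1 / delta))

def new_bound(gamma: float, n: int, delta: float) -> float:
    # Bound from this paper, constants dropped:
    # sqrt((gamma + 1/n) * log(1/delta))
    return math.sqrt((gamma + 1 / n) * math.log(1 / delta))

n, delta = 10_000, 0.01
for gamma in (1 / n, 1 / math.sqrt(n)):
    print(f"gamma={gamma:.4f}  classical={classical_bound(gamma, n, delta):.3f}"
          f"  new={new_bound(gamma, n, delta):.3f}")
```

With these assumed unit constants, $n = 10{,}000$, and $\delta = 0.01$: at $\gamma = 1/\sqrt{n} = 0.01$ the classical bound evaluates to about 2.17, exceeding the range of a $[0,1]$-bounded loss, while the new bound is about 0.22 and thus remains informative exactly where the classical bound becomes vacuous.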
