Bounds on the Excess Minimum Risk via Generalized Information Divergence Measures

30 May 2025
Ananya Omanwar
Fady Alajaji
Tamás Linder
arXiv:2505.24117
Main text: 22 pages, 3 figures; bibliography: 3 pages
Abstract

Given finite-dimensional random vectors Y, X, and Z that form a Markov chain in that order (i.e., Y → X → Z), we derive upper bounds on the excess minimum risk using generalized information divergence measures. Here, Y is a target vector to be estimated from an observed feature vector X or its stochastically degraded version Z. The excess minimum risk is defined as the difference between the minimum expected loss in estimating Y from X and from Z. We present a family of bounds that generalize the mutual information based bound of Györfi et al. (2023), using the Rényi and α-Jensen-Shannon divergences, as well as Sibson's mutual information. Our bounds are similar to those developed by Modak et al. (2021) and Aminian et al. (2024) for the generalization error of learning algorithms. However, unlike these works, our bounds do not require the sub-Gaussian parameter to be constant and therefore apply to a broader class of joint distributions over Y, X, and Z. We also provide numerical examples under both constant and non-constant sub-Gaussianity assumptions, illustrating that our generalized divergence based bounds can be tighter than the one based on mutual information for certain regimes of the parameter α.
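The notion of excess minimum risk can be made concrete in a setting where both minimum risks have closed forms. The sketch below (not taken from the paper; the variances s1, s2 and the Gaussian chain are illustrative assumptions) builds a jointly Gaussian Markov chain Y → X → Z with squared-error loss, where the minimum expected loss is the MMSE, i.e., the conditional variance, and compares the closed-form excess risk with a Monte Carlo estimate.

```python
import numpy as np

# Illustrative Gaussian Markov chain Y -> X -> Z (assumed example,
# not the paper's construction): Y ~ N(0,1), X = Y + noise, Z = X + noise.
rng = np.random.default_rng(0)
n = 200_000
s1, s2 = 0.5, 1.0  # noise variances added at the X and Z stages

y = y_target = rng.normal(0.0, 1.0, n)        # target vector Y
x = y + rng.normal(0.0, np.sqrt(s1), n)       # observed feature X
z = x + rng.normal(0.0, np.sqrt(s2), n)       # degraded version Z

# Closed-form MMSEs for this chain under squared-error loss:
# Var(Y|X) = s1 / (1 + s1), Var(Y|Z) = (s1 + s2) / (1 + s1 + s2).
mmse_x = s1 / (1 + s1)
mmse_z = (s1 + s2) / (1 + s1 + s2)
excess = mmse_z - mmse_x                      # excess minimum risk, >= 0

# Monte Carlo check using the optimal (conditional-mean) estimators:
# E[Y|X] = X / (1 + s1) and E[Y|Z] = Z / (1 + s1 + s2).
mc_x = np.mean((y - x / (1 + s1)) ** 2)
mc_z = np.mean((y - z / (1 + s1 + s2)) ** 2)

print(f"excess minimum risk (closed form): {excess:.4f}")
print(f"excess minimum risk (Monte Carlo): {mc_z - mc_x:.4f}")
```

The excess is nonnegative by the data-processing structure of the chain: Z is a degraded view of X, so estimating Y from Z can only do worse. The divergence-based bounds in the paper upper-bound exactly this gap.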

@article{omanwar2025_2505.24117,
  title={Bounds on the Excess Minimum Risk via Generalized Information Divergence Measures},
  author={Ananya Omanwar and Fady Alajaji and Tamás Linder},
  journal={arXiv preprint arXiv:2505.24117},
  year={2025}
}