Given finite-dimensional random vectors $Y$, $X$, and $Z$ that form a Markov chain in that order (i.e., $Y \to X \to Z$), we derive upper bounds on the excess minimum risk using generalized information divergence measures. Here, $Y$ is a target vector to be estimated from an observed feature vector $X$ or from its stochastically degraded version $Z$. The excess minimum risk is defined as the difference between the minimum expected loss in estimating $Y$ from $Z$ and the minimum expected loss in estimating $Y$ from $X$. We present a family of bounds that generalize the mutual information based bound of Györfi et al. (2023), using the Rényi divergence, the $\alpha$-Jensen-Shannon divergence, and Sibson's mutual information. Our bounds are similar to those developed by Modak et al. (2021) and Aminian et al. (2024) for the generalization error of learning algorithms. However, unlike these works, our bounds do not require the sub-Gaussian parameter to be constant and therefore apply to a broader class of joint distributions over $Y$, $X$, and $Z$. We also provide numerical examples under both constant and non-constant sub-Gaussianity assumptions, illustrating that our generalized divergence based bounds can be tighter than the mutual information based bound for certain regimes of the parameter $\alpha$.
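To make the quantity being bounded concrete, the following is a minimal LaTeX sketch of the excess minimum risk under an assumed generic loss function $\ell$; the symbols $L^*$, $\ell$, $f$, and $g$ are illustrative notation chosen here, not taken from the paper.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Minimum expected loss in estimating Y from X (resp. Z), with the infimum
% taken over measurable estimators f (resp. g).
\[
L^*(Y \mid X) = \inf_{f}\, \mathbb{E}\bigl[\ell\bigl(Y, f(X)\bigr)\bigr],
\qquad
L^*(Y \mid Z) = \inf_{g}\, \mathbb{E}\bigl[\ell\bigl(Y, g(Z)\bigr)\bigr].
\]
% Excess minimum risk: the penalty for observing the degraded Z instead of X.
% It is nonnegative because Y -> X -> Z is a Markov chain, so any estimator
% based on Z can be reproduced by an estimator based on X.
\[
L^*(Y \mid Z) - L^*(Y \mid X) \;\ge\; 0.
\]
\end{document}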
@article{omanwar2025_2505.24117,
  title   = {Bounds on the Excess Minimum Risk via Generalized Information Divergence Measures},
  author  = {Ananya Omanwar and Fady Alajaji and Tamás Linder},
  journal = {arXiv preprint arXiv:2505.24117},
  year    = {2025}
}