
Sharper Concentration Inequalities for Multi-Graph Dependent Variables

Main: 9 pages · Appendix: 26 pages · Bibliography: 4 pages · 1 figure · 6 tables
Abstract

In multi-task learning (MTL) where each task involves graph-dependent data, existing theoretical analyses yield a sub-optimal generalization risk bound of $O(\frac{1}{\sqrt{n}})$, where $n$ is the number of training examples. This is attributed to the lack of a foundational sharper concentration inequality for multi-graph dependent random variables. To fill this gap, this paper proposes a new Bennett-type inequality for such variables, enabling the derivation of a sharper risk bound of $O(\frac{\log n}{n})$. Specifically, building on the proposed Bennett inequality, we derive a corresponding Talagrand-type inequality for the empirical process and further develop an analytical framework based on local Rademacher complexity to sharpen generalization analyses in MTL with multi-graph dependent data. Finally, we apply these theoretical advancements to applications such as Macro-AUC optimization, demonstrating the superiority of our results over previous work, which is further corroborated by experimental results.
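For context, the classical Bennett inequality for independent bounded random variables, which the paper generalizes to the multi-graph dependent setting, can be stated as follows (this is standard background, not the paper's new inequality):

```latex
% Classical Bennett inequality (independent case, standard statement).
% Let X_1, ..., X_n be independent with E[X_i] = 0 and |X_i| <= a,
% and let v = \sum_i E[X_i^2] be the total variance. Then for all t > 0:
\[
  \Pr\!\left( \sum_{i=1}^{n} X_i \ge t \right)
  \le \exp\!\left( -\frac{v}{a^{2}}\, h\!\left( \frac{a t}{v} \right) \right),
  \qquad h(u) = (1+u)\log(1+u) - u .
\]
```

Because $h(u) \approx u^2/2$ for small $u$, variance-sensitive bounds of this type are what allow fast $O(\frac{\log n}{n})$ rates, in contrast to the $O(\frac{1}{\sqrt{n}})$ rates obtained from bounded-difference (Hoeffding/McDiarmid-style) arguments.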
