
Sharper Risk Bound for Multi-Task Learning with Multi-Graph Dependent Data

Main: 9 pages · 1 figure · 6 tables · Bibliography: 4 pages · Appendix: 26 pages
Abstract

In multi-task learning (MTL) where each task involves graph-dependent data, existing generalization analyses yield a \emph{sub-optimal} risk bound of $O(\frac{1}{\sqrt{n}})$, where $n$ is the number of training samples per task. Improving this risk bound, however, is technically challenging, owing to the lack of a foundational sharper concentration inequality for multi-graph dependent random variables. To fill this gap, this paper proposes a new Bennett-type inequality, enabling the derivation of a sharper risk bound of $O(\frac{\log n}{n})$. Technically, building on the proposed Bennett-type inequality, we establish a new Talagrand-type inequality for the empirical process and further develop a new analytical framework of local fractional Rademacher complexity to enhance generalization analyses in MTL with multi-graph dependent data. Finally, we apply these theoretical advancements to applications such as Macro-AUC optimization, illustrating the superiority of our theoretical results over prior work; this superiority is also verified by experimental results.
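For context, the classical Bennett inequality for independent, bounded random variables is sketched below; the paper's contribution is a multi-graph dependent analogue of this baseline (the paper's exact statement and constants are not reproduced here). Bennett-type bounds exploit variance information, which is what typically enables fast rates such as $O(\frac{\log n}{n})$ once combined with a localized complexity analysis.

```latex
% Classical Bennett inequality (independent case), stated for context only;
% the paper generalizes this setting to multi-graph dependent random variables.
% Assumptions: X_1, ..., X_n independent, E[X_i] = 0, |X_i| <= b almost surely,
% and v = \sum_i E[X_i^2] the total variance.
\[
  \Pr\!\Big(\sum_{i=1}^{n} X_i \ge t\Big)
  \le \exp\!\Big(-\frac{v}{b^2}\, h\!\Big(\frac{bt}{v}\Big)\Big),
  \qquad h(u) = (1+u)\log(1+u) - u .
\]
```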
