Optimal Prediction Using Expert Advice and Randomized Littlestone Dimension

Annual Conference on Computational Learning Theory (COLT), 2023
Main: 40 pages · 5 figures · 1 table · Bibliography: 3 pages
Abstract

A classical result in online learning characterizes the optimal mistake bound achievable by deterministic learners using the Littlestone dimension (Littlestone '88). We prove an analogous result for randomized learners: we show that the optimal expected mistake bound in learning a class $\mathcal{H}$ equals its randomized Littlestone dimension, which is the largest $d$ for which there exists a tree shattered by $\mathcal{H}$ whose average depth is $2d$. We further study optimal mistake bounds in the agnostic case, as a function of the number of mistakes made by the best function in $\mathcal{H}$, denoted by $k$. We show that the optimal randomized mistake bound for learning a class with Littlestone dimension $d$ is $k + \Theta(\sqrt{kd} + d)$. This also implies an optimal deterministic mistake bound of $2k + O(\sqrt{kd} + d)$, thus resolving an open question which was studied by Auer and Long ['99]. As an application of our theory, we revisit the classical problem of prediction using expert advice: about 30 years ago Cesa-Bianchi, Freund, Haussler, Helmbold, Schapire and Warmuth studied prediction using expert advice, provided that the best among the $n$ experts makes at most $k$ mistakes, and asked what are the optimal mistake bounds. Cesa-Bianchi, Freund, Helmbold, and Warmuth ['93, '96] provided a nearly optimal bound for deterministic learners, and left the randomized case as an open problem. We resolve this question by providing an optimal learning rule in the randomized case, and showing that its expected mistake bound equals half of the deterministic bound, up to negligible additive terms. This improves upon previous works by Cesa-Bianchi, Freund, Haussler, Helmbold, Schapire and Warmuth ['93, '97], by Abernethy, Langford, and Warmuth ['06], and by Brânzei and Peres ['19], which handled the regimes $k \ll \log n$ or $k \gg \log n$.
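To make the prediction-with-expert-advice setting concrete, here is a minimal sketch of the classical Randomized Weighted Majority baseline for this problem (due to Littlestone and Warmuth). This is only an illustration of the setting; it is not the optimal learning rule constructed in the paper, and the learning-rate parameter `eta` is an arbitrary choice for the sketch.

```python
import random

def randomized_weighted_majority(expert_preds, outcomes, eta=0.5):
    """Randomized Weighted Majority: an illustrative baseline for
    prediction with expert advice (not the paper's optimal rule).

    expert_preds: per round, a list of {0,1} predictions, one per expert.
    outcomes: the true {0,1} label for each round.
    Returns the number of mistakes made by the randomized learner.
    """
    n = len(expert_preds[0])
    weights = [1.0] * n  # uniform initial weight on each expert
    mistakes = 0
    for preds, y in zip(expert_preds, outcomes):
        total = sum(weights)
        # Predict 1 with probability equal to the weight fraction of
        # experts currently predicting 1.
        p1 = sum(w for w, p in zip(weights, preds) if p == 1) / total
        guess = 1 if random.random() < p1 else 0
        if guess != y:
            mistakes += 1
        # Multiplicatively penalize experts that erred this round.
        weights = [w * (1 - eta) if p != y else w
                   for w, p in zip(weights, preds)]
    return mistakes
```

With $n$ experts the best of which makes at most $k$ mistakes, such multiplicative-weights schemes achieve an expected mistake bound of roughly $k + O(\sqrt{k \log n} + \log n)$; the paper pins down the optimal constants across all regimes of $k$ versus $\log n$.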
