Towards Optimal Algorithms for Prediction with Expert Advice
We study the classical problem of prediction with expert advice in the adversarial setting with a geometric stopping time. In 1965, Cover gave the optimal algorithm for the case of 2 experts. In this paper, we design the optimal algorithm, adversary and regret for the case of 3 experts. Further, we show that the optimal algorithm for 2 and 3 experts is a probability matching algorithm (analogous to Thompson sampling) against a particular randomized adversary. Remarkably, it turns out that this algorithm is not only optimal against this adversary, but also minimax optimal against all possible adversaries. We establish a constant factor separation between the regrets achieved by the optimal algorithm and the widely used multiplicative weights algorithm. Along the way, we improve the regret lower bounds for the multiplicative weights algorithm for an arbitrary number of experts and show that the improved bound is tight for 2 experts. A novel aspect of our analysis is that we develop upper and lower bounds simultaneously, analogous to the primal-dual method. The analysis of the optimal adversary relies on delicate random walk estimates. We further use this connection to develop an improved regret bound for the case of 4 experts, and provide a general framework for designing the optimal algorithm for an arbitrary number of experts.
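For context, the multiplicative weights algorithm referenced above is the standard baseline the paper compares against. The sketch below is a minimal illustrative implementation in the geometric stopping time setting (the game ends after each round with probability delta); it is not the optimal algorithm designed in the paper, and the learning rate, the binary losses, and the random adversary stand-in are assumptions made only for illustration.

```python
import random


def multiplicative_weights(k, eta, losses):
    """Minimal multiplicative weights sketch for prediction with expert advice.

    k      -- number of experts
    eta    -- learning rate (illustrative choice, not the paper's optimal tuning)
    losses -- sequence of per-round loss vectors, each a list of k values in [0, 1]

    Returns (algorithm_loss, best_expert_loss); the regret is their difference.
    """
    weights = [1.0] * k
    alg_loss = 0.0
    cum_expert_loss = [0.0] * k

    for loss in losses:
        total = sum(weights)
        probs = [w / total for w in weights]                 # follow each expert with prob ∝ weight
        alg_loss += sum(p * l for p, l in zip(probs, loss))  # expected loss this round
        for i in range(k):
            cum_expert_loss[i] += loss[i]
            weights[i] *= (1.0 - eta) ** loss[i]             # multiplicative update

    return alg_loss, min(cum_expert_loss)


if __name__ == "__main__":
    # Geometric stopping time: after each round the game stops with probability delta.
    random.seed(0)
    k, delta, eta = 3, 0.01, 0.1
    losses = []
    while True:
        losses.append([random.randint(0, 1) for _ in range(k)])  # random binary losses as an adversary stand-in
        if random.random() < delta:
            break
    alg, best = multiplicative_weights(k, eta, losses)
    print(f"rounds={len(losses)}  regret={alg - best:.3f}")
```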