On the Universal Near Optimality of Hedge in Combinatorial Settings
In this paper, we study the classical Hedge algorithm in combinatorial settings. In each round, the learner selects a vector $v_t$ from a set $\mathcal{V} \subseteq \{0,1\}^d$, observes a full loss vector $\ell_t \in [0,1]^d$, and incurs the loss $\langle v_t, \ell_t \rangle$. This setting captures several important problems, including extensive-form games, resource allocation, $m$-sets, online multitask learning, and shortest-path problems on directed acyclic graphs (DAGs). It is well known that Hedge achieves regret of order $m\sqrt{T \log |\mathcal{V}|}$, where $m = \max_{v \in \mathcal{V}} \|v\|_1$, after $T$ rounds of interaction. In this paper, we ask whether Hedge is optimal across all combinatorial settings. To that end, we show that for any such set $\mathcal{V}$, Hedge is near-optimal, up to a small multiplicative factor, by establishing a regret lower bound that holds for any algorithm and matches Hedge's guarantee up to that factor. We then identify a natural class of combinatorial sets, namely $m$-sets (subsets of size $m$) in a suitable regime of $m$, for which this lower bound is tight and for which Hedge is provably suboptimal by an explicit constant factor. At the same time, we show that Hedge is optimal for online multitask learning, a generalization of the classical experts problem. Finally, we leverage the near-optimality of Hedge to establish the existence of a near-optimal regularizer for online shortest-path problems in DAGs, a setting that subsumes a broad range of combinatorial domains. Specifically, we show that the classical Online Mirror Descent (OMD) algorithm, when instantiated with the dilated entropy regularizer, is iterate-equivalent to Hedge, and therefore inherits its near-optimal regret guarantees for DAGs.
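To make the interaction protocol concrete, here is a minimal sketch (not from the paper; the function name and interface are illustrative) of Hedge run over an explicitly enumerated combinatorial set: the learner keeps a weight proportional to $\exp(-\eta \cdot \text{cumulative loss})$ for each action $v \in \mathcal{V}$ and incurs the expected loss of the induced distribution each round.

```python
import math

def hedge_combinatorial(V, loss_vectors, eta):
    """Illustrative Hedge over a finite combinatorial set V.

    V: list of 0/1 tuples (the combinatorial actions).
    loss_vectors: one loss vector ell_t per round, entries in [0, 1].
    eta: learning rate.
    Returns the algorithm's expected cumulative loss and the final
    cumulative loss of every action in V.
    """
    cum = [0.0] * len(V)           # cumulative loss of each v in V
    expected_loss = 0.0
    for ell in loss_vectors:
        # Hedge weights: w(v) proportional to exp(-eta * cumulative loss of v);
        # subtracting the minimum is only for numerical stability.
        base = min(cum)
        w = [math.exp(-eta * (c - base)) for c in cum]
        Z = sum(w)
        p = [x / Z for x in w]
        # loss of each action on this round: <v, ell_t>
        round_losses = [sum(vi * li for vi, li in zip(v, ell)) for v in V]
        expected_loss += sum(pi * L for pi, L in zip(p, round_losses))
        cum = [c + L for c, L in zip(cum, round_losses)]
    return expected_loss, cum
```

For instance, taking $\mathcal{V}$ to be the 2-sets of $d = 4$ coordinates (`[v for v in itertools.product([0, 1], repeat=4) if sum(v) == 2]`) recovers the $m$-set setting on a toy scale; enumerating $\mathcal{V}$ like this is of course only viable for small sets, which is precisely why structured implementations such as OMD with the dilated entropy regularizer matter for DAGs.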