
Nearly Minimax-Optimal Regret for Linearly Parameterized Bandits

Abstract

We study the linear contextual bandit problem with finite action sets. When the problem dimension is $d$, the time horizon is $T$, and there are $n \leq 2^{d/2}$ candidate actions per time period, we (1) show that the minimax expected regret is $\Omega(\sqrt{dT \log T \log n})$ for every algorithm, and (2) introduce a Variable-Confidence-Level (VCL) SupLinUCB algorithm whose regret matches the lower bound up to iterated logarithmic factors. Our algorithmic result saves two $\sqrt{\log T}$ factors over previous analyses, and our information-theoretic lower bound improves on previous results by one $\sqrt{\log T}$ factor, revealing a regret scaling quite different from that of classical multi-armed bandits, whose minimax regret contains no logarithmic $T$ term. On the upper bound side, our proof techniques include variable confidence levels and a careful analysis of the layer sizes of SupLinUCB; on the lower bound side, we construct delicate adversarial sequences showing the tightness of elliptical potential lemmas.
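For readers unfamiliar with the SupLinUCB family, the sketch below illustrates the layered structure that the VCL variant builds on: observed samples are partitioned into layers, each layer maintains its own ridge-regression estimate, and in each round actions are explored, exploited, or eliminated according to per-layer confidence widths. This is a minimal sketch of that scheme with a per-layer (hence "variable") confidence schedule, not the paper's exact VCL algorithm; the `contexts`/`reward_fn` interface and the `alpha` schedule are assumptions made for the example.

```python
import numpy as np

def suplinucb_vcl_sketch(contexts, reward_fn, T, d, delta=0.01):
    """Layered SupLinUCB-style bandit loop with a per-layer
    ("variable") confidence schedule.

    `contexts(t)` is assumed to return an (n, d) array of candidate
    action features for round t, and `reward_fn(t, x)` a noisy reward
    for playing feature vector x -- both hypothetical interfaces for
    this sketch, as is the `alpha` schedule below.
    """
    S = max(1, int(np.ceil(np.log2(T))))        # number of layers
    # Per-layer ridge-regression statistics: A_s = I + sum x x^T, b_s = sum r x.
    A = [np.eye(d) for _ in range(S)]
    b = [np.zeros(d) for _ in range(S)]
    # Hypothetical variable confidence levels, one per layer; vanilla
    # SupLinUCB would use the same alpha in every layer.
    alpha = [np.sqrt(np.log((s + 1) * T / delta)) for s in range(S)]

    total_reward = 0.0
    for t in range(T):
        X = contexts(t)                         # (n, d) candidate features
        candidates = np.arange(len(X))          # surviving action indices
        for s in range(S):
            A_inv = np.linalg.inv(A[s])
            theta = A_inv @ b[s]                # layer-s ridge estimate
            Xc = X[candidates]
            means = Xc @ theta
            # Confidence width of each surviving candidate under layer s.
            widths = alpha[s] * np.sqrt(
                np.einsum('ij,jk,ik->i', Xc, A_inv, Xc))
            if np.any(widths > 2.0 ** (-s)):
                # Explore: play a poorly estimated action and record
                # the sample in this layer only.
                i = candidates[np.argmax(widths)]
                chosen_layer = s
                break
            if s == S - 1 or np.all(widths <= 1.0 / np.sqrt(T)):
                # Estimates are sharp enough: exploit greedily.
                i = candidates[np.argmax(means + widths)]
                chosen_layer = None              # exploit rounds are not recorded
                break
            # Eliminate clearly suboptimal actions, descend to a finer layer.
            best = np.max(means + widths)
            candidates = candidates[means + widths >= best - 2.0 ** (1 - s)]
        r = reward_fn(t, X[i])
        total_reward += r
        if chosen_layer is not None:
            A[chosen_layer] += np.outer(X[i], X[i])
            b[chosen_layer] += r * X[i]
    return total_reward
```

The only departure from vanilla SupLinUCB in this sketch is that `alpha[s]` varies with the layer index; per the abstract, the paper's improvement comes from such variable confidence levels together with a sharper analysis of the layer sizes, shaving two $\sqrt{\log T}$ factors from earlier regret bounds.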
