We study the matroid semi-bandits problem, where at each round the learner plays a subset of arms from a feasible set, and the goal is to maximize the expected cumulative linear reward. Existing algorithms have per-round time complexity at least $\Omega(K)$, where $K$ is the number of arms, which becomes expensive when $K$ is large. To address this computational issue, we propose FasterCUCB, whose sampling rule takes time sublinear in $K$ for common classes of matroids: uniform matroids, partition matroids, and graphical matroids, as well as transversal matroids. The per-round bounds are expressed in terms of the maximum number of elements in any feasible subset of arms and the horizon $T$. Our technique is based on dynamic maintenance of an approximate maximum-weight basis over inner-product weights. Although working with an approximate maximum-weight basis complicates the regret analysis, we can still guarantee a regret upper bound as tight as that of CUCB, in the sense that it asymptotically matches the gap-dependent lower bound of Kveton et al. (2014a).
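As context for the computational bottleneck mentioned above, the following is a minimal sketch (in Python, not taken from the paper) of a standard CUCB-style sampling rule: play a maximum-weight basis with respect to UCB indices, specialized here to a partition matroid, where greedy selection is exact. Because it scans every arm each round, this baseline illustrates the $\Omega(K)$ per-round cost that FasterCUCB's sublinear-time sampling rule is designed to avoid; the helper names (`ucb_index`, `block_of`, `capacity`) and the confidence-radius constant are illustrative assumptions.

```python
# Illustrative sketch of the Omega(K)-time CUCB-style baseline, assuming rewards in [0, 1].
import math
from collections import defaultdict

def ucb_index(mean: float, pulls: int, t: int) -> float:
    """UCB1-style index; arms that were never pulled get an infinite index."""
    if pulls == 0:
        return float("inf")
    return mean + math.sqrt(1.5 * math.log(t) / pulls)

def cucb_partition_matroid(means, pulls, t, block_of, capacity):
    """Return a maximum-weight basis of a partition matroid under UCB weights.

    means, pulls : per-arm empirical means and pull counts (lists of length K)
    block_of     : block_of[i] is the partition block containing arm i
    capacity     : capacity[b] is the maximum number of arms allowed from block b
    """
    K = len(means)
    indices = [ucb_index(means[i], pulls[i], t) for i in range(K)]
    chosen, used = [], defaultdict(int)
    # Greedy selection in decreasing order of weight yields a maximum-weight
    # basis of a matroid; for a partition matroid this means filling each block
    # up to its capacity. Scanning and sorting all K arms is the per-round cost.
    for i in sorted(range(K), key=lambda i: indices[i], reverse=True):
        b = block_of[i]
        if used[b] < capacity[b]:
            chosen.append(i)
            used[b] += 1
    return chosen
```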