Minimax Optimal Submodular Optimization with Bandit Feedback

Neural Information Processing Systems (NeurIPS), 2023
Abstract

We consider maximizing a monotone, submodular set function $f: 2^{[n]} \rightarrow [0,1]$ under stochastic bandit feedback. Specifically, $f$ is unknown to the learner, but at each time $t = 1, \dots, T$ the learner chooses a set $S_t \subseteq [n]$ with $|S_t| \le k$ and receives reward $f(S_t) + \eta_t$, where $\eta_t$ is mean-zero sub-Gaussian noise. The objective is to minimize the learner's regret over $T$ rounds with respect to a $(1 - e^{-1})$-approximation of $f(S_*)$, where $S_* = \arg\max_{|S| \le k} f(S)$; this is the approximation ratio guaranteed by greedy maximization of $f$. To date, the best regret bound in the literature scales as $k n^{1/3} T^{2/3}$, and by trivially treating every size-$k$ set as a unique arm one deduces that $\sqrt{\binom{n}{k} T}$ is also achievable. In this work, we establish the first minimax lower bound for this setting, which scales like $\Omega(\min_{i \le k}(i n^{1/3} T^{2/3} + \sqrt{n^{k-i} T}))$. Moreover, we propose an algorithm whose regret matches this lower bound.
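To make the benchmark concrete, the following sketch illustrates the classical offline greedy algorithm that attains the $(1 - e^{-1})$ approximation referenced above, on a hypothetical toy coverage function (the function `f`, the `COVER` dictionary, and all names are illustrative assumptions, not the paper's construction; the paper's setting only observes noisy evaluations of $f$).

```python
from itertools import combinations
from math import e

# Toy monotone submodular function: set coverage (illustrative assumption).
# Ground set [n] = {0, 1, 2, 3}; each element covers some "items".
COVER = {
    0: {"a", "b"},
    1: {"b", "c"},
    2: {"c", "d", "e"},
    3: {"e"},
}

def f(S):
    """Coverage value: number of distinct items covered (monotone, submodular)."""
    covered = set()
    for x in S:
        covered |= COVER[x]
    return len(covered)

def greedy_max(f, ground, k):
    """Repeatedly add the element with the largest marginal gain, k times."""
    S = set()
    for _ in range(k):
        best = max((x for x in ground if x not in S),
                   key=lambda x: f(S | {x}) - f(S))
        S.add(best)
    return S

k = 2
S_greedy = greedy_max(f, COVER.keys(), k)
# Brute-force optimum for comparison on this tiny instance.
opt = max(f(set(c)) for c in combinations(COVER, k))
# Nemhauser-Wolsey-Fisher guarantee: f(greedy) >= (1 - 1/e) * f(S_*).
assert f(S_greedy) >= (1 - 1 / e) * opt
```

In the bandit setting studied here, the learner cannot evaluate marginal gains exactly and must instead estimate them from noisy rewards, which is the source of the regret trade-off in the bounds above.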
