Continuous Submodular Maximization: Boosting via Non-oblivious Function

In this paper, we revisit constrained and stochastic continuous submodular maximization in both offline and online settings. For each $\gamma$-weakly DR-submodular function $f$, we use a factor-revealing optimization equation to derive an optimal auxiliary function $F$, whose stationary points provide a $(1-e^{-\gamma})$-approximation to the global maximum value (denoted as $OPT$) of the problem $\max_{x\in\mathcal{C}} f(x)$. Naturally, projected (mirror) gradient ascent on this non-oblivious function achieves $(1-e^{-\gamma}-\epsilon^{2})OPT-\epsilon$ after $O(1/\epsilon^{2})$ iterations, beating the traditional $(\gamma^{2}/(1+\gamma^{2}))$-approximation gradient ascent (Hassani et al., 2017) for submodular maximization. Similarly, based on $F$, the classical Frank-Wolfe algorithm equipped with the variance-reduction technique of Mokhtari et al. (2018) also returns a solution with objective value larger than $(1-e^{-\gamma}-\epsilon^{2})OPT-\epsilon$ after $O(1/\epsilon^{3})$ iterations. In the online setting, we first consider adversarial delays for stochastic gradient feedback, under which we propose a boosting online gradient algorithm with the same non-oblivious search, achieving a regret of $O(\sqrt{D})$ (where $D$ is the sum of delays of gradient feedback) against a $(1-e^{-\gamma})$-approximation to the best feasible solution in hindsight. Finally, extensive numerical experiments demonstrate the efficiency of our boosting methods.
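
To make the non-oblivious search concrete, here is a minimal sketch (an illustration under assumptions, not the paper's own code) of one standard way such a boosting estimator can work: if the auxiliary function's gradient takes the form $\nabla F(x) = \int_0^1 e^{\gamma(z-1)} \nabla f(zx)\,dz$ (a form assumed here; the abstract does not state the formula), it can be estimated with a single stochastic gradient of $f$ by sampling $z$ with density proportional to $e^{\gamma(z-1)}$ and reweighting. The helper names `grad_f` and `project`, the surrogate's exact weighting, and the step size below are all hypothetical.

```python
import numpy as np

def boosted_gradient(grad_f, x, gamma, rng):
    # One-sample estimator of the assumed non-oblivious gradient
    #   grad F(x) = \int_0^1 e^{gamma (z - 1)} grad f(z x) dz.
    # Draw z with density proportional to e^{gamma (z - 1)} on [0, 1]
    # via inverse-CDF sampling, then rescale so the estimate is unbiased.
    u = rng.uniform()
    w = 1.0 - np.exp(-gamma)                      # total mass of e^{gamma(z-1)} on [0, 1]
    z = 1.0 + np.log(u * w + np.exp(-gamma)) / gamma
    return (w / gamma) * grad_f(z * x)

def boosting_gradient_ascent(grad_f, project, x0, gamma, step=0.05, iters=1000, seed=0):
    # Projected gradient ascent on the surrogate F instead of f.
    # Hypothetical helpers: `grad_f` returns a (stochastic) gradient of f,
    # `project` maps a point back onto the constraint set C.
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        g = boosted_gradient(grad_f, x, gamma, rng)
        x = project(x + step * g)
    return x
```

The design point this illustrates is that the non-oblivious ascent costs one gradient query of $f$ per iteration, the same as ordinary projected gradient ascent; only the sampling point $z\cdot x$ and the reweighting factor change.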
View on arXiv