Continuous Submodular Maximization: Boosting via Non-oblivious Function

Abstract

In this paper, we revisit constrained and stochastic continuous submodular maximization in both offline and online settings. For each $\gamma$-weakly DR-submodular function $f$, we use a factor-revealing optimization equation to derive an optimal auxiliary function $F$, whose stationary points provide a $(1-e^{-\gamma})$-approximation to the global maximum value (denoted as $OPT$) of the problem $\max_{\boldsymbol{x}\in\mathcal{C}} f(\boldsymbol{x})$. Naturally, projected (mirror) gradient ascent on this non-oblivious function achieves $(1-e^{-\gamma}-\epsilon^{2})OPT-\epsilon$ after $O(1/\epsilon^{2})$ iterations, beating the traditional $(\frac{\gamma^{2}}{1+\gamma^{2}})$-approximation gradient ascent \citep{hassani2017gradient} for submodular maximization. Similarly, based on $F$, the classical Frank-Wolfe algorithm equipped with a variance-reduction technique \citep{mokhtari2018conditional} also returns a solution with objective value larger than $(1-e^{-\gamma}-\epsilon^{2})OPT-\epsilon$ after $O(1/\epsilon^{3})$ iterations. In the online setting, we first consider adversarial delays for stochastic gradient feedback, under which we propose a boosting online gradient algorithm with the same non-oblivious search, achieving a regret of $O(\sqrt{D})$ (where $D$ is the sum of the delays of the gradient feedback) against a $(1-e^{-\gamma})$-approximation to the best feasible solution in hindsight. Finally, extensive numerical experiments demonstrate the efficiency of our boosting methods.
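
To make the boosting idea concrete, the sketch below instantiates one common form of such a non-oblivious surrogate, $F(\boldsymbol{x})=\int_0^1 \frac{e^{\gamma(z-1)}}{z} f(z\boldsymbol{x})\,dz$, whose gradient $\nabla F(\boldsymbol{x})=\int_0^1 e^{\gamma(z-1)}\nabla f(z\boldsymbol{x})\,dz$ can be estimated by sampling $z$ uniformly from $[0,1]$ and reweighting the stochastic gradient of $f$. This is a minimal illustrative sketch, not the paper's exact algorithm: the integral form of $F$, the box constraint, the toy quadratic objective, and the function names (boosted_grad, project_box, boosted_pga) are assumptions made for this example.

```python
import numpy as np

def boosted_grad(grad_f, x, gamma=1.0, n_samples=20, rng=None):
    # Monte-Carlo estimate of the assumed surrogate gradient
    #   grad F(x) = int_0^1 e^{gamma*(z-1)} grad f(z*x) dz,
    # obtained by sampling z ~ Uniform(0, 1) and reweighting grad f(z*x).
    rng = rng or np.random.default_rng(0)
    g = np.zeros_like(x)
    for _ in range(n_samples):
        z = rng.uniform()
        g += np.exp(gamma * (z - 1.0)) * grad_f(z * x)
    return g / n_samples

def project_box(x, lo=0.0, hi=1.0):
    # Euclidean projection onto the box constraint [lo, hi]^n (illustrative choice of C).
    return np.clip(x, lo, hi)

def boosted_pga(grad_f, x0, gamma=1.0, step=0.05, iters=200):
    # Projected gradient ascent driven by the boosted (non-oblivious) gradient estimate.
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        x = project_box(x + step * boosted_grad(grad_f, x, gamma))
    return x

# Toy monotone DR-submodular objective: f(x) = h^T x + 0.5 x^T H x with all
# entries of H non-positive (so every second partial derivative is <= 0) and
# a non-negative gradient on [0, 1]^n.
rng = np.random.default_rng(1)
n = 5
H = -rng.uniform(0.1, 1.0, size=(n, n))
H = (H + H.T) / 2.0
h = -H @ np.ones(n)            # makes grad f(x) = H (x - 1) >= 0 on the box
grad_f = lambda x: h + H @ x

x_hat = boosted_pga(grad_f, x0=np.zeros(n))
print("boosted PGA solution:", np.round(x_hat, 3))
```

In this sketch the surrogate gradient, rather than the raw gradient of $f$, drives the ascent direction; the projection step and the Monte-Carlo sample size are the only other tuning choices.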
