arXiv:2201.00703v3 (latest)

Stochastic Continuous Submodular Maximization: Boosting via Non-oblivious Function

3 January 2022
Qixin Zhang
Zengde Deng
Zaiyi Chen
Haoyuan Hu
Yu Yang
arXiv (abs) · PDF · HTML
Abstract

In this paper, we revisit Stochastic Continuous Submodular Maximization in both offline and online settings, which has wide applications in machine learning and operations research. We present a boosting framework covering gradient ascent and online gradient ascent. The fundamental ingredient of our methods is a novel non-oblivious function $F$ derived from a factor-revealing optimization problem, any stationary point of which provides a $(1-e^{-\gamma})$-approximation to the global maximum of the $\gamma$-weakly DR-submodular objective function $f\in C^{1,1}_L(\mathcal{X})$. In the offline scenario, we propose a boosting gradient ascent method that achieves a $(1-e^{-\gamma}-\epsilon^{2})$-approximation after $O(1/\epsilon^2)$ iterations, improving on the $\frac{\gamma^2}{1+\gamma^2}$ approximation ratio of the classical gradient ascent algorithm. In the online setting, we consider, for the first time, adversarial delays for stochastic gradient feedback, under which we propose a boosting online gradient algorithm based on the same non-oblivious function $F$. We show that this boosting online algorithm achieves a regret of $O(\sqrt{D})$ against a $(1-e^{-\gamma})$-approximation to the best feasible solution in hindsight, where $D$ is the sum of the delays of the gradient feedback. To the best of our knowledge, this is the first result to obtain $O(\sqrt{T})$ regret against a $(1-e^{-\gamma})$-approximation with $O(1)$ gradient queries per time step when no delay exists, i.e., $D=T$. Finally, numerical experiments demonstrate the effectiveness of our boosting methods.
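
For intuition, below is a minimal sketch of the kind of boosted (stochastic) gradient ascent the abstract describes: projected stochastic ascent on a non-oblivious surrogate $F$, whose gradient estimate is obtained by querying one stochastic gradient of $f$ at a randomly scaled point with a boosting weight. The helper names (`stochastic_grad_f`, `project`), the weight $e^{\gamma(z-1)}$, the uniform sampling of $z$, the toy objective, and all hyperparameters are illustrative assumptions, not the paper's exact construction.

```python
# Illustrative sketch (not the paper's exact construction): boosted stochastic
# gradient ascent on a non-oblivious surrogate F of a monotone gamma-weakly
# DR-submodular objective f over a convex set X.
import numpy as np

def boosted_surrogate_grad(x, stochastic_grad_f, gamma, rng):
    """One-query stochastic estimate of the surrogate's gradient at x."""
    z = rng.uniform(low=1e-12, high=1.0)        # random scaling factor (assumed sampling)
    weight = np.exp(gamma * (z - 1.0))          # illustrative boosting weight
    return weight * stochastic_grad_f(z * x)    # single stochastic gradient query per step

def boosted_gradient_ascent(x0, stochastic_grad_f, project, gamma,
                            n_iters=1000, step_size=0.05, seed=0):
    """Projected stochastic gradient ascent on the non-oblivious surrogate."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    avg = np.zeros_like(x)
    for t in range(1, n_iters + 1):
        g = boosted_surrogate_grad(x, stochastic_grad_f, gamma, rng)
        x = project(x + step_size * g)          # ascent step, then project back onto X
        avg += (x - avg) / t                    # running average of the iterates
    return avg

# Toy usage: f(x) = sum_i log(1 + x_i) is monotone DR-submodular on X = [0,1]^d.
d = 5
grad_f = lambda x: 1.0 / (1.0 + x) + 0.01 * np.random.randn(d)  # noisy gradient oracle
proj_box = lambda x: np.clip(x, 0.0, 1.0)                       # projection onto [0,1]^d
x_hat = boosted_gradient_ascent(np.zeros(d), grad_f, proj_box, gamma=1.0)
```

The online variant in the abstract follows the same template, replacing the fixed objective with per-round functions and applying the update whenever (possibly delayed) gradient feedback arrives.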
