Continuous Submodular Maximization: Beyond DR-Submodularity

21 June 2020
Moran Feldman
Amin Karbasi
arXiv:2006.11726
Abstract

In this paper, we propose the first continuous optimization algorithms that achieve a constant factor approximation guarantee for the problem of monotone continuous submodular maximization subject to a linear constraint. We first prove that a simple variant of the vanilla coordinate ascent, called Coordinate-Ascent+, achieves a $(\frac{e-1}{2e-1}-\varepsilon)$-approximation guarantee while performing $O(n/\varepsilon)$ iterations, where the computational complexity of each iteration is roughly $O(n/\sqrt{\varepsilon}+n\log n)$ (here, $n$ denotes the dimension of the optimization problem). We then propose Coordinate-Ascent++, which achieves the tight $(1-1/e-\varepsilon)$-approximation guarantee while performing the same number of iterations, but at a higher computational complexity of roughly $O(n^3/\varepsilon^{2.5} + n^3 \log n / \varepsilon^2)$ per iteration. However, the computation of each round of Coordinate-Ascent++ can be easily parallelized so that the computational cost per machine scales as $O(n/\sqrt{\varepsilon}+n\log n)$.
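The abstract does not spell out the update rules, so the following is only a minimal sketch of a vanilla coordinate-ascent loop for this setting: maximizing a monotone continuous submodular $f$ over $[0,1]^n$ under a linear budget constraint $\langle c, x\rangle \le b$ by repeatedly raising the coordinate with the best marginal gain per unit of budget. The names (`f`, `c`, `b`, `delta`) and the specific step rule are illustrative assumptions, not the paper's Coordinate-Ascent+ or Coordinate-Ascent++.

```python
# Minimal illustrative sketch (assumptions, not the paper's algorithm):
# maximize a monotone continuous submodular f over [0, 1]^n subject to a
# linear budget constraint <c, x> <= b via greedy coordinate ascent.
import numpy as np

def coordinate_ascent(f, c, b, n, delta=0.05, max_iters=10_000):
    """Repeatedly increase the coordinate with the largest marginal gain
    per unit of budget spent, until no feasible step improves f."""
    x = np.zeros(n)
    budget = float(b)
    for _ in range(max_iters):
        base = f(x)
        best_i, best_step, best_ratio = None, 0.0, 0.0
        for i in range(n):
            if c[i] <= 0:
                continue
            # Largest allowed step: box constraint and remaining budget.
            step = min(delta, 1.0 - x[i], budget / c[i])
            if step <= 0:
                continue
            x_try = x.copy()
            x_try[i] += step
            ratio = (f(x_try) - base) / (c[i] * step)
            if ratio > best_ratio:
                best_i, best_step, best_ratio = i, step, ratio
        if best_i is None:  # budget exhausted or no improving coordinate
            break
        x[best_i] += best_step
        budget -= c[best_i] * best_step
    return x

# Toy usage: a separable concave (hence monotone DR-submodular) objective.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 5
    w = rng.uniform(1.0, 2.0, n)   # objective weights
    c = rng.uniform(0.5, 1.5, n)   # cost vector of the linear constraint
    f = lambda x: float(np.sum(w * np.sqrt(x)))
    print(coordinate_ascent(f, c, b=1.0, n=n))
```

The gain-per-unit-cost rule above is just one natural greedy choice; the paper's Coordinate-Ascent+ and Coordinate-Ascent++ presumably select coordinates and step sizes more carefully to obtain the stated approximation guarantees.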
