Communication-Efficient Decentralized Online Continuous DR-Submodular Maximization

18 August 2022
Qixin Zhang
Zengde Deng
Xiangru Jian
Zaiyi Chen
Haoyuan Hu
Yu Yang
arXiv:2208.08681 (abs | PDF | HTML)
Abstract

Maximizing a monotone submodular function is a fundamental task in machine learning, economics, and statistics. In this paper, we present two communication-efficient decentralized online algorithms for the monotone continuous DR-submodular maximization problem, both of which reduce the number of per-function gradient evaluations and the per-round communication complexity from $T^{3/2}$ to $1$. The first, One-shot Decentralized Meta-Frank-Wolfe (Mono-DMFW), achieves a $(1-1/e)$-regret bound of $O(T^{4/5})$. As far as we know, this is the first one-shot and projection-free decentralized online algorithm for monotone continuous DR-submodular maximization. Next, inspired by the non-oblivious boosting function \citep{zhang2022boosting}, we propose the Decentralized Online Boosting Gradient Ascent (DOBGA) algorithm, which attains a $(1-1/e)$-regret of $O(\sqrt{T})$. To the best of our knowledge, this is the first result to obtain the optimal regret bound $O(\sqrt{T})$ against a $(1-1/e)$-approximation with only one gradient inquiry for each local objective function per step. Finally, various experimental results confirm the effectiveness of the proposed methods.
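The abstract only states the communication and gradient budget of the proposed methods (one gradient evaluation per local function and one communication round per step), not their full update rules. The following is a minimal, illustrative Python sketch of that general decentralized online pattern: gossip averaging over a doubly stochastic weight matrix followed by a single projected gradient step per node. The box projection, step size, and gradient placeholder are assumptions for illustration only; this is not the paper's Mono-DMFW or DOBGA algorithm.

```python
import numpy as np

def project_onto_box(x, lo=0.0, hi=1.0):
    """Euclidean projection onto a box (a stand-in for a general convex
    constraint set; assumed here purely for illustration)."""
    return np.clip(x, lo, hi)

def decentralized_online_step(X, local_grads, W, eta):
    """One round of a generic decentralized online gradient-ascent update:
    each node averages its neighbors' iterates via a doubly stochastic
    gossip matrix W, then takes a single local gradient step and projects.

    X           : (n_nodes, d) array, current iterate of every node
    local_grads : (n_nodes, d) array, ONE gradient per node for this round
    W           : (n_nodes, n_nodes) doubly stochastic gossip matrix
    eta         : step size
    """
    consensus = W @ X                       # one communication round with neighbors
    X_next = consensus + eta * local_grads  # single gradient step per node
    return project_onto_box(X_next)         # stay inside the constraint set

# Toy usage: 4 nodes on a ring, 3-dimensional decision variables.
rng = np.random.default_rng(0)
n, d = 4, 3
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
X = rng.uniform(size=(n, d))
grads = rng.uniform(size=(n, d))            # placeholder for (boosted) local gradients
X = decentralized_online_step(X, grads, W, eta=0.1)
```

The one-gradient, one-communication budget per round is what the abstract highlights; the specific surrogate gradients (e.g., the boosted objective for DOBGA) and regret analysis are detailed in the paper itself.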
