ResearchTrend.AI

arXiv:2412.02492
The Cost of Consistency: Submodular Maximization with Constant Recourse

3 December 2024
Paul Dütting
Federico Fusco
Silvio Lattanzi
Ashkan Norouzi-Fard
Ola Svensson
Morteza Zadimoghaddam
Abstract

In this work, we study online submodular maximization and how the requirement of maintaining a stable solution impacts the approximation. In particular, we seek bounds on the best-possible approximation ratio attainable when the algorithm is allowed at most a constant number of updates per step. We show a tight information-theoretic bound of 2/3 for general monotone submodular functions, and an improved (also tight) bound of 3/4 for coverage functions. Since both bounds are attained by non-polynomial-time algorithms, we also give a polynomial-time randomized algorithm that achieves a 0.51-approximation. Combined with an information-theoretic hardness of 1/2 for deterministic algorithms from prior work, our results show a separation between deterministic and randomized algorithms, both information-theoretically and for polynomial-time algorithms.
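For readers unfamiliar with the objective studied here, the following sketch illustrates a monotone submodular function (coverage) and the classic offline greedy rule under a cardinality constraint. This is background only, not the paper's algorithm: greedy achieves a (1 − 1/e)-approximation offline, whereas the paper concerns the harder online setting where elements arrive over time and only a constant number of solution updates (recourse) are allowed per step. The example sets and the function names are illustrative choices.

```python
# Background sketch (not the paper's algorithm): coverage functions are
# monotone submodular, and offline greedy maximization under a
# cardinality constraint picks the element of largest marginal gain.

def coverage(selected, sets):
    """f(S) = size of the union of the chosen sets (monotone submodular)."""
    covered = set()
    for i in selected:
        covered |= sets[i]
    return len(covered)

def greedy(sets, k):
    """Choose k set indices, each step adding the largest marginal gain."""
    chosen = []
    for _ in range(k):
        best, best_gain = None, -1
        for i in range(len(sets)):
            if i in chosen:
                continue
            gain = coverage(chosen + [i], sets) - coverage(chosen, sets)
            if gain > best_gain:
                best, best_gain = i, gain
        chosen.append(best)
    return chosen

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
print(greedy(sets, 2))  # picks index 0 (gain 3), then index 2 (gain 3)
```

In the online-with-recourse model studied by the paper, the algorithm must maintain a good solution as elements arrive while changing it only a constant number of times per step, which is what drives the 2/3 and 3/4 bounds in the abstract.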
