


Differentiable Greedy Submodular Maximization with Guarantees and Gradient Estimators

Shinsaku Sakaue
arXiv:2005.02578, 6 May 2020
Abstract

We consider making outputs of the greedy algorithm for monotone submodular function maximization differentiable w.r.t. parameters of objective functions. Due to the discontinuous behavior of the algorithm, we must use some smoothing method. Our contribution is a theoretically guaranteed and widely applicable smoothing framework based on randomization. We prove that our smoothed greedy algorithm almost recovers the original approximation guarantees in expectation for the cases of cardinality and $\kappa$-extensible system constraints. We also show that unbiased gradient estimators of any expected output-dependent quantities can be efficiently obtained by sampling outputs. We confirm the utility and effectiveness of our framework by applying it to sensitivity analysis of the greedy algorithm and decision-focused learning of parameterized submodular models.
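The abstract does not spell out the randomization scheme, so the sketch below is only a minimal illustration of the general idea, not the paper's exact method: a greedy loop under a cardinality constraint where the usual argmax over marginal gains is replaced by sampling from a softmax, which smooths the output distribution w.r.t. the parameters. The names `smoothed_greedy`, `temp`, and the toy coverage objective are all illustrative assumptions.

```python
import numpy as np

def smoothed_greedy(f, ground_set, k, theta, temp=1.0, rng=None):
    """One run of a softmax-randomized greedy pass (illustrative sketch).

    f(S, theta) -> float is a monotone submodular objective parameterized
    by theta. Returns the sampled set and the log-probability of the
    sampled selection trajectory.
    """
    rng = rng or np.random.default_rng()
    S, logp = [], 0.0
    remaining = list(ground_set)
    for _ in range(min(k, len(remaining))):
        # Marginal gains of each remaining element given the current set.
        gains = np.array([f(S + [v], theta) - f(S, theta) for v in remaining])
        # Softmax over gains smooths the hard argmax of vanilla greedy.
        z = gains / temp
        z -= z.max()  # numerical stability
        p = np.exp(z) / np.exp(z).sum()
        i = rng.choice(len(remaining), p=p)
        logp += np.log(p[i])
        S.append(remaining.pop(i))
    return S, logp

# Toy parameterized weighted-coverage objective (illustrative only).
items = {0: {0, 1}, 1: {1, 2}, 2: {2, 3}, 3: {0, 3}}

def f(S, theta):
    covered = set().union(*(items[v] for v in S)) if S else set()
    return float(sum(theta[u] for u in covered))

theta = np.array([1.0, 0.5, 2.0, 1.5])
rng = np.random.default_rng(0)
sets = [smoothed_greedy(f, list(items), 2, theta, rng=rng)[0] for _ in range(1000)]
print(np.mean([f(S, theta) for S in sets]))  # Monte Carlo estimate of E[f(S)]
```

Because the output is now a sample from a distribution over sets, the expectation of any output-dependent quantity can be estimated by averaging over sampled runs, as above; the returned trajectory log-probability is the ingredient a score-function (REINFORCE-style) estimator would use to get unbiased gradients w.r.t. theta, which in practice would be computed with an autodiff framework rather than NumPy.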
