Stronger Approximation Guarantees for Non-Monotone γ-Weakly DR-Submodular Maximization

Hareshkumar Jadav
Ranveer Singh
Vaneet Aggarwal
Main: 14 Pages
1 Figure
Bibliography: 3 Pages
1 Table
Appendix: 35 Pages
Abstract

Maximizing submodular objectives under constraints is a fundamental problem in machine learning and optimization. We study the maximization of a nonnegative, non-monotone γ-weakly DR-submodular function over a down-closed convex body. Our main result is an approximation algorithm whose guarantee depends smoothly on γ; in particular, when γ = 1 (the DR-submodular case) our bound recovers the 0.401 approximation factor, while for γ < 1 the guarantee degrades gracefully and improves upon previously reported bounds for γ-weakly DR-submodular maximization under the same constraints. Our approach combines a Frank-Wolfe-guided continuous-greedy framework with a γ-aware double-greedy step, yielding a simple yet effective procedure for handling non-monotonicity. This results in state-of-the-art guarantees for non-monotone γ-weakly DR-submodular maximization over down-closed convex bodies.
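
To make the continuous-greedy ingredient concrete, the following is a minimal Python sketch of a generic Frank-Wolfe-style continuous-greedy loop for DR-submodular maximization over a down-closed polytope. It is an illustration only, not the authors' algorithm: the gradient oracle `grad_f`, the linear maximization oracle `lmo`, the step count, and the toy quadratic instance are all assumptions made for this example.

```python
import numpy as np

def frank_wolfe_continuous_greedy(grad_f, lmo, dim, num_steps=100):
    """Sketch: Frank-Wolfe-style continuous greedy for maximizing a
    (weakly) DR-submodular function over a down-closed convex body P.
    Starts at the origin (feasible since P is down-closed); the final
    iterate is an average of LMO vertices, so it stays in P by convexity.
    """
    x = np.zeros(dim)
    eta = 1.0 / num_steps  # fixed step size
    for _ in range(num_steps):
        g = grad_f(x)
        v = lmo(np.maximum(g, 0.0))  # only ascend along non-negative directions
        x = x + eta * v
    return x

# Hypothetical toy instance: F(x) = a.x - 0.5 x'Hx with H >= 0 entrywise
# (Hessian -H <= 0, so F is non-monotone DR-submodular on [0,1]^n);
# P = {x in [0,1]^n : sum(x) <= k} is a down-closed polytope.
rng = np.random.default_rng(0)
n, k = 10, 3
a = rng.uniform(0.5, 1.5, n)
H = rng.uniform(0.0, 0.3, (n, n)); H = (H + H.T) / 2

def grad_f(x):
    return a - H @ x

def lmo(g):
    # Linear maximization over P: set the k largest coordinates of g
    # to 1 (only those with positive gradient), all others to 0.
    v = np.zeros_like(g)
    idx = np.argsort(g)[::-1][:k]
    v[idx[g[idx] > 0]] = 1.0
    return v

x_hat = frank_wolfe_continuous_greedy(grad_f, lmo, n)
print("F(x_hat) =", a @ x_hat - 0.5 * x_hat @ H @ x_hat)
```

The γ-aware double-greedy step that handles non-monotonicity, and the specific coupling between the two procedures, are the paper's contribution and are not reproduced here.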
