Stronger Approximation Guarantees for Non-Monotone γ-Weakly DR-Submodular Maximization
Maximizing submodular objectives under constraints is a fundamental problem in machine learning and optimization. We study the maximization of a nonnegative, non-monotone γ-weakly DR-submodular function over a down-closed convex body. Our main result is an approximation algorithm whose guarantee depends smoothly on γ: when γ = 1 (the DR-submodular case) our bound recovers the known approximation factor for that setting, while for γ < 1 the guarantee degrades gracefully and improves upon previously reported bounds for γ-weakly DR-submodular maximization under the same constraints. Our approach combines a Frank-Wolfe-guided continuous-greedy framework with a γ-aware double-greedy step, yielding a simple yet effective procedure for handling non-monotonicity. This results in state-of-the-art guarantees for non-monotone γ-weakly DR-submodular maximization over down-closed convex bodies.
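To make the algorithmic ingredients concrete, the following is a minimal, hedged sketch of a measured Frank-Wolfe continuous-greedy loop for non-monotone DR-submodular maximization over a down-closed box. It is not the paper's algorithm (in particular, the γ-aware double-greedy step is omitted); the quadratic objective, the box constraint, and the step count `T` are illustrative assumptions.

```python
import numpy as np

def f(x, H, h):
    # Quadratic objective f(x) = h.x + 0.5 x^T H x; an entrywise
    # non-positive H makes f continuous DR-submodular, and it is
    # typically non-monotone on the box.
    return h @ x + 0.5 * x @ H @ x

def grad_f(x, H, h):
    # Exact gradient of the quadratic objective.
    return h + H @ x

def measured_continuous_greedy(H, h, u, T=200):
    # Down-closed feasible set: the box [0, u] with 0 <= u <= 1.
    x = np.zeros_like(u)
    for _ in range(T):
        g = grad_f(x, H, h)
        # Linear maximization oracle over the box: saturate the
        # coordinates with positive gradient, zero out the rest.
        v = np.where(g > 0, u, 0.0)
        # Measured update: damping by (1 - x) keeps iterates feasible
        # and is the standard device for handling non-monotonicity.
        x = x + (v * (1.0 - x)) / T
    return x

rng = np.random.default_rng(0)
n = 5
A = rng.random((n, n))
H = -(A + A.T)           # entrywise non-positive => DR-submodular
h = rng.random(n) * 2.0  # positive linear term; f still non-monotone
u = np.full(n, 0.8)      # down-closed box constraint

x_star = measured_continuous_greedy(H, h, u)
```

The damped update `x + v * (1 - x) / T` (rather than the plain Frank-Wolfe step `x + v / T`) is what distinguishes the measured variant used for non-monotone objectives.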