We study submodular optimization in an adversarial context, applicable to machine learning problems such as feature selection using data susceptible to uncertainties and attacks. We focus on Stackelberg games between an attacker (or interdictor) and a defender, where the attacker aims to minimize the defender's objective of maximizing a $k$-submodular function. We allow uncertainties arising from the success of attacks and inherent data noise, and address the challenges posed by incomplete knowledge of the probability distribution of the random parameters. Specifically, we introduce the Distributionally Robust $k$-Submodular Interdiction Problem (DRO $k$-SIP) and the Distributionally Risk-Receptive $k$-Submodular Interdiction Problem (DRR $k$-SIP), along with finitely convergent exact algorithms for solving them. In the DRO $k$-SIP, the attacker optimizes their expected payoff with respect to the worst-case probability distribution within an ambiguity set, thereby obtaining attack strategies that are robust to distributional ambiguity. In contrast, the DRR $k$-SIP identifies attack strategies under the best-case probability distribution, exposing critical vulnerabilities for the defender. The optimal values of the DRO $k$-SIP and DRR $k$-SIP together provide a confidence-interval-like range for the expected value of the defender's objective function that captures the distributional ambiguity. We conduct computational experiments on instances of feature selection and sensor placement problems, using Wisconsin breast cancer data and synthetic data, respectively.
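As background, the defender's inner problem (absent interdiction) is monotone $k$-submodular maximization, e.g., assigning one of $k$ sensor types to each placed sensor. A minimal greedy sketch is below; the coverage function and data are hypothetical illustrations, and the paper's exact algorithms for the DRO/DRR $k$-SIP are not reproduced here.

```python
# Illustrative sketch only: greedy for monotone k-submodular maximization.
# Each item may receive one of k "types" (e.g., sensor kinds); the objective
# here is a coverage function, a standard k-submodular example. Data below
# is hypothetical, not from the paper.

def greedy_k_submodular(items, k, f, budget):
    """Repeatedly add the (item, type) pair with the largest marginal gain."""
    assignment = {}  # item -> type in {0, ..., k-1}
    for _ in range(budget):
        best, best_gain = None, 0.0
        for v in items:
            if v in assignment:
                continue  # each item gets at most one type
            for t in range(k):
                trial = dict(assignment)
                trial[v] = t
                gain = f(trial) - f(assignment)
                if gain > best_gain:
                    best_gain, best = gain, (v, t)
        if best is None:
            break  # no positive marginal gain remains
        assignment[best[0]] = best[1]
    return assignment

# Hypothetical coverage data: (item, type) -> set of covered targets.
coverage = {
    ("s1", 0): {1, 2}, ("s1", 1): {3},
    ("s2", 0): {2, 3}, ("s2", 1): {4, 5},
    ("s3", 0): {5},    ("s3", 1): {1, 4},
}

def f(assignment):
    """Number of distinct targets covered by the partial assignment."""
    covered = set()
    for v, t in assignment.items():
        covered |= coverage.get((v, t), set())
    return float(len(covered))

sol = greedy_k_submodular(["s1", "s2", "s3"], k=2, f=f, budget=2)
# sol assigns types to two items, covering 4 distinct targets.
```

For monotone $k$-submodular objectives, such greedy schemes carry constant-factor guarantees; the interdiction problems in the paper sit a level above this, with the attacker perturbing the defender's objective under distributional ambiguity.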
@article{park2025_2406.13023,
  title={$k$-Submodular Interdiction Problems under Distributional Risk-Receptiveness and Robustness: Application to Machine Learning},
  author={Seonghun Park and Manish Bansal},
  journal={arXiv preprint arXiv:2406.13023},
  year={2025}
}