In recent years several adversarial attacks and defenses have been proposed. Often seemingly robust models turn out to be non-robust when more sophisticated attacks are used. One way out of this dilemma is to establish provable robustness guarantees. While provably robust models for specific $\ell_p$-perturbation models have been developed, they are still vulnerable to other $\ell_q$-perturbations. We propose a new regularization scheme, MMR-Universal, for ReLU networks which enforces robustness wrt $\ell_1$- and $\ell_\infty$-perturbations, and we show how this leads to provably robust models wrt any $\ell_p$-norm for $p \geq 1$.
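As a brief illustration of why $\ell_1$ and $\ell_\infty$ are the natural extreme cases (a naive bound from standard norm inequalities, not the paper's construction): for $\delta \in \mathbb{R}^d$ and $p \geq 1$,
\[
\|\delta\|_\infty \;\leq\; \|\delta\|_p \qquad \text{and} \qquad \|\delta\|_1 \;\leq\; d^{\,1-1/p}\,\|\delta\|_p ,
\]
so certified radii $\epsilon_\infty$ (wrt $\ell_\infty$) and $\epsilon_1$ (wrt $\ell_1$) already imply a certified $\ell_p$-radius of $\max\{\epsilon_\infty,\; d^{\,1/p-1}\epsilon_1\}$, since the $\ell_p$-ball of that radius is contained in one of the two certified balls. The guarantee derived in the paper is stronger than this naive union bound, which is what makes simultaneous $\ell_1$- and $\ell_\infty$-robustness yield meaningful guarantees for all intermediate $p$.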