

Provable robustness against all adversarial $l_p$-perturbations for $p \geq 1$

27 May 2019
Francesco Croce
Matthias Hein
Abstract

In recent years several adversarial attacks and defenses have been proposed. Often seemingly robust models turn out to be non-robust when more sophisticated attacks are used. One way out of this dilemma is provable robustness guarantees. While provably robust models for specific $l_p$-perturbation models have been developed, we show that they do not come with any guarantee against other $l_q$-perturbations. We propose a new regularization scheme, MMR-Universal, for ReLU networks which enforces robustness wrt $l_1$- and $l_\infty$-perturbations, and show that this leads to the first provably robust models wrt any $l_p$-norm for $p \geq 1$.
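As a rough illustration of why a single-norm certificate transfers poorly to other norms (this is only the elementary norm-inequality argument, not the combined guarantee the paper obtains from enforcing $l_1$- and $l_\infty$-robustness jointly): for $x \in \mathbb{R}^d$ and $p \geq 1$,
$$\|x\|_\infty \leq \|x\|_p \leq \|x\|_1 \leq d^{1-1/p}\,\|x\|_p,$$
so an $l_1$-certificate of radius $\epsilon_1$ on its own only yields an $l_p$-certificate of radius $\epsilon_1 / d^{1-1/p}$, which becomes negligible in high dimensions for every $p > 1$.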
