Mind the box: $l_1$-APGD for sparse adversarial attacks on image classifiers

1 March 2021
Francesco Croce
Matthias Hein
AAML
Abstract

We show that, when the image domain $[0,1]^d$ is also taken into account, established $l_1$-projected gradient descent (PGD) attacks are suboptimal, as they do not consider that the effective threat model is the intersection of the $l_1$-ball and $[0,1]^d$. We study the expected sparsity of the steepest descent step for this effective threat model and show that the exact projection onto this set is computationally feasible and yields better performance. Moreover, we propose an adaptive form of PGD which is highly effective even with a small budget of iterations. The resulting $l_1$-APGD is a strong white-box attack showing that prior works overestimated their $l_1$-robustness. Using $l_1$-APGD for adversarial training we obtain a robust classifier with state-of-the-art $l_1$-robustness. Finally, we combine $l_1$-APGD and an adaptation of the Square Attack to $l_1$ into $l_1$-AutoAttack, an ensemble of attacks which reliably assesses adversarial robustness for the threat model of the $l_1$-ball intersected with $[0,1]^d$.
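
For a clean image $x$, the effective threat model described above is the set $\{z \in [0,1]^d : \|z - x\|_1 \le \epsilon\}$. As an illustration of why the exact projection onto this intersection is computationally feasible, below is a minimal NumPy sketch that computes it by bisection on the Lagrange multiplier of the $l_1$ constraint. The bisection scheme, the name project_l1_box, and the tolerances are assumptions made for illustration here; the paper derives its own exact projection routine, which this sketch does not reproduce.

```python
import numpy as np

def project_l1_box(y, x, eps, tol=1e-9, max_iter=100):
    """Euclidean projection of y onto {z in [0,1]^d : ||z - x||_1 <= eps}.

    Hypothetical sketch: bisection on the Lagrange multiplier lam of the
    l1 constraint; given lam, the box constraint decouples per coordinate
    into a clipped soft-thresholding step.
    """
    lo, hi = -x, 1.0 - x          # box for the perturbation u = z - x
    v = y - x

    def clipped_shrink(lam):
        # Soft-threshold by lam, then clip into the per-coordinate box.
        u = np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
        return np.clip(u, lo, hi)

    # If projecting onto the box alone already satisfies the l1 budget,
    # that is also the projection onto the intersection.
    u0 = np.clip(v, lo, hi)
    if np.abs(u0).sum() <= eps:
        return x + u0

    # ||clipped_shrink(lam)||_1 decreases monotonically from ||u0||_1 > eps
    # at lam = 0 down to 0 at lam = max|v|, so bisection finds the lam at
    # which the l1 budget is met.
    lam_lo, lam_hi = 0.0, np.abs(v).max()
    while lam_hi - lam_lo > tol and max_iter > 0:
        lam = 0.5 * (lam_lo + lam_hi)
        if np.abs(clipped_shrink(lam)).sum() > eps:
            lam_lo = lam
        else:
            lam_hi = lam
        max_iter -= 1
    return x + clipped_shrink(lam_hi)  # the lam_hi side stays feasible


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.random(3 * 32 * 32)         # hypothetical flattened image
    y = x + rng.normal(size=x.size)     # candidate iterate to project back
    z = project_l1_box(y, x, eps=12.0)  # eps = 12, the l1 radius the paper uses for CIFAR-10
    print(np.abs(z - x).sum(), z.min(), z.max())
```

Inside a PGD-style attack, each gradient-ascent iterate would be mapped back to the feasible set with a projection of this kind after every step.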
