ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv:2306.14326
Computational Asymmetries in Robust Classification

25 June 2023
Samuele Marro
M. Lombardi
    AAML
Abstract

In the context of adversarial robustness, we make three strongly related contributions. First, we prove that while attacking ReLU classifiers is NP-hard, ensuring their robustness at training time is Σ_2^P-hard (even on a single example). This asymmetry provides a rationale for the fact that robust classification approaches are frequently fooled in the literature. Second, we show that inference-time robustness certificates are not affected by this asymmetry, by introducing a proof-of-concept approach named Counter-Attack (CA). Indeed, CA displays a reversed asymmetry: running the defense is NP-hard, while attacking it is Σ_2^P-hard. Finally, motivated by our previous result, we argue that adversarial attacks can be used in the context of robustness certification, and provide an empirical evaluation of their effectiveness. As a byproduct of this process, we also release UG100, a benchmark dataset for adversarial attacks.
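The certification idea in the abstract, that an attack can double as an inference-time robustness certificate, can be illustrated on a case where the attack is exact. The sketch below is not the paper's Counter-Attack implementation; it assumes a simple linear classifier sign(w·x + b), for which the minimal L2 perturbation flipping the prediction has the closed form |w·x + b| / ||w||, so "running the attack" yields a sound certificate for a given budget ε.

```python
import numpy as np

def counter_attack_certificate(w, b, x, eps):
    """Certify robustness of the linear classifier sign(w.x + b) at point x.

    The minimal L2 perturbation that can flip the prediction is the
    distance from x to the decision hyperplane: |w.x + b| / ||w||.
    If that distance exceeds eps, no attack within the eps-ball can
    succeed, so the point is certified robust.

    Returns (certified_robust, minimal_perturbation_norm).
    """
    margin = abs(np.dot(w, x) + b)
    min_pert = margin / np.linalg.norm(w)
    return bool(min_pert > eps), float(min_pert)

# Hypothetical toy classifier and input (illustration only):
w = np.array([3.0, 4.0])   # ||w|| = 5
b = -1.0
x = np.array([1.0, 2.0])   # w.x + b = 10, so min perturbation = 2.0

print(counter_attack_certificate(w, b, x, eps=1.5))  # certified robust
print(counter_attack_certificate(w, b, x, eps=2.5))  # not certified
```

For ReLU networks the analogous minimal-perturbation computation is NP-hard rather than closed-form, which is exactly the asymmetry the abstract exploits: the defender pays an NP-hard cost at inference time, while defeating the certificate would be Σ_2^P-hard.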
