ResearchTrend.AI

arXiv:2102.05475
Adversarial Robustness: What fools you makes you stronger

10 February 2021
Grzegorz Gluch
R. Urbanke
    AAML
Abstract

We prove an exponential separation in sample complexity between the standard PAC-learning model and a version of the Equivalence-Query-learning model. We then show that this separation has interesting implications for adversarial robustness. We explore a vision of designing an adaptive defense that, in the presence of an attacker, computes a model that is provably robust. In particular, we show how to realize this vision in a simplified setting. To do so, we introduce the notion of a strong adversary: it is not limited in the type of perturbations it can apply, and when presented with a classifier it can repeatedly generate different adversarial examples. We explain why this notion is interesting to study and use it to prove the following. There exists an efficient adversarial-learning-like scheme such that for every strong adversary A it outputs a classifier that (a) cannot be strongly attacked by A, or (b) has error at most ϵ. In either case, our scheme uses exponentially (in ϵ) fewer samples than what the PAC bound requires.
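The flavor of the scheme can be illustrated with a toy sketch (this is not the paper's actual construction; all names and the 1-D threshold setting are illustrative assumptions). The learner treats the strong adversary as an equivalence-query oracle: it proposes a classifier, and each adversarial example the adversary produces serves as a counterexample. For a threshold concept on [0, 1], every counterexample halves the search interval, so the learner reaches accuracy ϵ in O(log(1/ϵ)) rounds, whereas PAC learning from i.i.d. samples would need on the order of 1/ϵ examples.

```python
def make_strong_adversary(true_threshold, tol=1e-9):
    """Hypothetical 'strong adversary' for a 1-D threshold concept:
    given a hypothesis threshold, it returns a point the hypothesis
    misclassifies (an adversarial example), or None if the hypothesis
    is essentially correct and it cannot attack."""
    def adversary(hypothesis_threshold):
        if abs(hypothesis_threshold - true_threshold) <= tol:
            return None  # cannot attack: hypothesis is (essentially) robust
        # Any point strictly between the two thresholds is labeled
        # differently by the hypothesis and the true concept.
        return (hypothesis_threshold + true_threshold) / 2.0
    return adversary

def eq_learn_threshold(adversary, lo=0.0, hi=1.0, max_rounds=60):
    """Equivalence-query-style learner: propose a threshold, let the
    adversary attack it, and use each counterexample to shrink the
    search interval (binary-search style)."""
    rounds = 0
    while rounds < max_rounds:
        guess = (lo + hi) / 2.0
        counterexample = adversary(guess)
        if counterexample is None:
            # The adversary failed to attack: output a robust classifier.
            return guess, rounds
        # The counterexample lies between the guess and the true
        # threshold, so it tells us which side to keep.
        if counterexample > guess:
            lo = counterexample
        else:
            hi = counterexample
        rounds += 1
    return (lo + hi) / 2.0, rounds
```

Each round at least halves the interval containing the true threshold, so the number of adversary queries grows only logarithmically in the desired precision; this mirrors, in miniature, the abstract's claim that interacting with a strong adversary can replace exponentially many i.i.d. samples.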
