Transfer of Adversarial Robustness Between Perturbation Types

3 May 2019
Daniel Kang
Yi Sun
Tom B. Brown
Dan Hendrycks
Jacob Steinhardt
    AAML
Abstract

We study the transfer of adversarial robustness of deep neural networks between different perturbation types. While most work on adversarial examples has focused on $L_\infty$- and $L_2$-bounded perturbations, these do not capture all types of perturbations available to an adversary. The present work evaluates 32 attacks of 5 different types against models adversarially trained on a 100-class subset of ImageNet. Our empirical results suggest that evaluating on a wide range of perturbation sizes is necessary to understand whether adversarial robustness transfers between perturbation types. We further demonstrate that robustness against one perturbation type may not always imply, and may sometimes hurt, robustness against other perturbation types. In light of these results, we recommend that evaluation of adversarial defenses take place on a diverse range of perturbation types and sizes.
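To make the notion of "perturbation types" concrete, below is a minimal sketch of a projected gradient descent (PGD) attack constrained to either an $L_\infty$ or an $L_2$ ball, the two norm-bounded types the abstract names. This is illustrative only: the paper evaluates 32 attacks of 5 different types on an ImageNet-100 model, and the function name `pgd_attack`, the `model`, and the `eps`/`step_size`/`num_steps` values are placeholders, not the authors' configuration.

```python
# Sketch of PGD under two perturbation types (L-infinity and L2 balls).
# Assumes batched image inputs of shape (N, C, H, W) with pixels in [0, 1].
import torch
import torch.nn as nn

def pgd_attack(model, x, y, eps, step_size, num_steps, norm="linf"):
    """Run PGD on inputs x with labels y, constrained to an eps-ball."""
    loss_fn = nn.CrossEntropyLoss()
    x_adv = x.clone().detach()

    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)

        with torch.no_grad():
            if norm == "linf":
                # Ascend along the gradient sign, then clip back into the ball.
                x_adv = x_adv + step_size * grad.sign()
                x_adv = x + (x_adv - x).clamp(-eps, eps)
            elif norm == "l2":
                # Ascend along the normalized gradient, then project onto the ball.
                g = grad / (grad.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)
                x_adv = x_adv + step_size * g
                delta = x_adv - x
                norms = delta.flatten(1).norm(dim=1).view(-1, 1, 1, 1)
                delta = delta * (eps / norms).clamp(max=1.0)
                x_adv = x + delta
            x_adv = x_adv.clamp(0.0, 1.0)  # keep pixels in the valid range
        x_adv = x_adv.detach()
    return x_adv
```

Following the abstract's recommendation, a transfer evaluation in this style would measure accuracy on `x_adv` across a grid of `eps` values for each perturbation type, rather than at a single perturbation size.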
