On Norm-Agnostic Robustness of Adversarial Training

15 May 2019
Bai Li
Changyou Chen
Wenlin Wang
Lawrence Carin
Abstract

Adversarial examples are carefully perturbed inputs for fooling machine learning models. A well-acknowledged defense method against such examples is adversarial training, where adversarial examples are injected into the training data to increase robustness. In this paper, we propose a new attack to unveil an undesired property of state-of-the-art adversarial training, namely that it fails to obtain robustness against perturbations in the $\ell_2$ and $\ell_\infty$ norms simultaneously. We also discuss a possible solution to this issue and its limitations.
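
As a rough sketch of the setup the abstract refers to, the Python/PyTorch snippet below pairs a PGD-style attack, projected onto either an $\ell_\infty$ or an $\ell_2$ ball, with a training step that replaces clean inputs by their adversarial counterparts. The model, the image-shaped input assumption, and the hyperparameters (eps, step_size, steps) are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps, step_size, steps, norm="linf"):
    """Iterative gradient ascent on the loss, projected onto an eps-ball of the given norm.

    Assumes image-shaped inputs of shape (N, C, H, W) with values in [0, 1].
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            if norm == "linf":
                x_adv = x_adv + step_size * grad.sign()
                x_adv = x + (x_adv - x).clamp(-eps, eps)          # project onto the linf ball
            else:  # "l2"
                g = grad / (grad.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)
                x_adv = x_adv + step_size * g
                delta = x_adv - x
                norms = delta.flatten(1).norm(dim=1).view(-1, 1, 1, 1)
                x_adv = x + delta * (eps / norms).clamp(max=1.0)  # project onto the l2 ball
            x_adv = x_adv.clamp(0.0, 1.0).detach()                # keep inputs in valid range
    return x_adv


def adversarial_training_step(model, optimizer, x, y, norm="linf", eps=8 / 255):
    """One adversarial-training step: the minibatch is replaced by adversarial examples."""
    model.eval()  # freeze batch-norm statistics while crafting the attack
    x_adv = pgd_attack(model, x, y, eps=eps, step_size=eps / 4, steps=10, norm=norm)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this picture, the paper's observation is roughly that training only on norm="linf" examples does not guarantee robustness to the $\ell_2$ attack, and vice versa.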

View on arXiv: 1905.06455