Towards Deep Learning Models Resistant to Large Perturbations

30 March 2020
Amirreza Shaeiri
Rozhin Nobahari
M. Rohban
    OOD
    AAML
arXiv:2003.13370
Abstract

Adversarial robustness has proven to be a required property of machine learning algorithms. A key and often overlooked aspect of this problem is making the adversarial noise magnitude as large as possible, so as to enhance the benefits of model robustness. We show that the well-established algorithm known as "adversarial training" fails to train a deep neural network given a large, but reasonable, perturbation magnitude. In this paper, we propose a simple yet effective initialization of the network weights that makes learning on higher levels of noise possible. We next evaluate this idea rigorously on the MNIST (ϵ up to ≈ 0.40) and CIFAR10 (ϵ up to ≈ 32/255) datasets, assuming the ℓ∞ attack model. Additionally, in order to establish the limits of ϵ within which learning is feasible, we study the optimal robust classifier assuming full access to the joint data and label distribution. We then provide theoretical results on the adversarial accuracy for a simple multi-dimensional Bernoulli distribution, which yield some insight into the range of feasible perturbations for the MNIST dataset.
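The "adversarial training" baseline the abstract refers to is training on worst-case ℓ∞-bounded perturbations, typically crafted with projected gradient descent (PGD). For reference only, below is a minimal PyTorch sketch of that standard baseline; it is not the paper's proposed weight initialization, and the model, ϵ, step size, and iteration count are placeholder assumptions rather than values taken from the paper.

```python
# Illustrative sketch of standard l_inf adversarial training (PGD-based),
# NOT the paper's proposed method. All hyperparameters are placeholders.
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps, alpha, steps):
    """Craft l_inf-bounded adversarial examples with projected gradient descent."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()                   # ascend the loss
            delta.clamp_(-eps, eps)                              # project into the l_inf ball
            delta.copy_(torch.clamp(x + delta, 0.0, 1.0) - x)    # keep pixels in [0, 1]
        delta.grad.zero_()
    return (x + delta).detach()

def adversarial_training_step(model, optimizer, x, y, eps=0.3, alpha=0.01, steps=40):
    """One minibatch of adversarial training: minimize loss on PGD examples only."""
    x_adv = pgd_linf(model, x, y, eps, alpha, steps)
    optimizer.zero_grad()   # clear gradients accumulated while crafting the attack
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Here eps is the ℓ∞ radius that the abstract pushes toward larger values (≈ 0.40 on MNIST, ≈ 32/255 on CIFAR10), the regime in which this plain adversarial training procedure is reported to fail without the proposed initialization.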
