Adversarial Examples Exist in Two-Layer ReLU Networks for Low Dimensional Linear Subspaces

1 March 2023 · arXiv:2303.00783

Odelia Melamed, Gilad Yehudai, Gal Vardi
Abstract

Despite a great deal of research, it is still not well understood why trained neural networks are highly vulnerable to adversarial examples. In this work we focus on two-layer neural networks trained using data which lie on a low-dimensional linear subspace. We show that standard gradient methods lead to non-robust neural networks, namely, networks which have large gradients in directions orthogonal to the data subspace, and are susceptible to small adversarial $L_2$-perturbations in these directions. Moreover, we show that decreasing the initialization scale of the training algorithm, or adding $L_2$ regularization, can make the trained network more robust to adversarial perturbations orthogonal to the data.
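The phenomenon the abstract describes can be probed empirically. Below is a minimal NumPy sketch, not the paper's construction: the dimensions, the logistic loss, the learning rate, and the step size `eps` are all illustrative assumptions. It trains a two-layer ReLU network f(x) = v·ReLU(Wx) with plain gradient descent on data confined to a k-dimensional subspace, then compares the input gradient's component on the subspace against its component orthogonal to it, and takes a small $L_2$ step along the orthogonal component.

```python
# Illustrative sketch (assumed setup, not the paper's exact construction):
# two-layer ReLU net trained on data lying in a k-dim subspace of R^d,
# then probed for input gradients orthogonal to that subspace.
import numpy as np

rng = np.random.default_rng(0)
d, k, m, n = 50, 3, 100, 200          # ambient dim, subspace dim, width, samples

# Orthonormal basis for a random k-dimensional data subspace.
B, _ = np.linalg.qr(rng.standard_normal((d, k)))   # d x k

Z = rng.standard_normal((n, k))        # subspace coordinates
X = Z @ B.T                            # n x d, lies exactly in span(B)
y = np.sign(Z[:, 0])                   # a linear label within the subspace
y[y == 0] = 1.0

W = rng.standard_normal((m, d)) / np.sqrt(d)       # standard-ish init scale
v = rng.standard_normal(m) / np.sqrt(m)

def forward(W, v, X):
    H = np.maximum(X @ W.T, 0.0)       # hidden activations, n x m
    return H @ v, H

lr = 0.1
for _ in range(2000):                  # plain gradient descent, logistic loss
    out, H = forward(W, v, X)
    margin = np.clip(y * out, -30.0, 30.0)
    g = -y / (1.0 + np.exp(margin))    # dloss/dout
    mask = (H > 0.0).astype(float)     # ReLU gates
    grad_v = H.T @ g / n
    grad_W = (g[:, None] * mask * v[None, :]).T @ X / n
    v -= lr * grad_v
    W -= lr * grad_W

# Input gradient at a training point, split on/off the data subspace.
x0 = X[0]
grad_x = W.T @ (v * (W @ x0 > 0.0))    # df/dx at x0
P = B @ B.T                            # projector onto span(B)
g_on, g_orth = P @ grad_x, grad_x - P @ grad_x
print("||grad on subspace|| =", np.linalg.norm(g_on))
print("||grad orthogonal||  =", np.linalg.norm(g_orth))

# Candidate adversarial perturbation: a small L2 step along the orthogonal
# gradient, a direction that leaves the data subspace entirely.
eps = 0.5
f0 = forward(W, v, x0[None, :])[0][0]
x_adv = x0 - eps * np.sign(f0) * g_orth / (np.linalg.norm(g_orth) + 1e-12)
print("f(x0)    =", f0)
print("f(x_adv) =", forward(W, v, x_adv[None, :])[0][0])
```

In line with the abstract's second claim, one would expect that shrinking the initialization scale (e.g. dividing the initial `W` by an extra factor) or adding an $L_2$ penalty on the weights to the loss reduces the orthogonal gradient norm reported above; the sketch makes that easy to test, though the specific hyperparameters here are assumptions.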

View on arXiv: https://arxiv.org/abs/2303.00783