
arXiv:2104.03863
A single gradient step finds adversarial examples on random two-layers neural networks

Neural Information Processing Systems (NeurIPS), 2021
8 April 2021
Sébastien Bubeck
Yeshwanth Cherapanamjeri
Gauthier Gidel
Rémi Tachet des Combes
Abstract

Daniely and Schacham recently showed that gradient descent finds adversarial examples on random undercomplete two-layer ReLU neural networks. The term "undercomplete" refers to the fact that their proof only holds when the number of neurons is a vanishing fraction of the ambient dimension. We extend their result to the overcomplete case, where the number of neurons is larger than the dimension (yet also subexponential in the dimension). In fact, we prove that a single step of gradient descent suffices. We also show this result for any subexponential-width random neural network with a smooth activation function.
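The phenomenon the abstract describes can be sketched numerically. The snippet below is an illustrative experiment, not the paper's construction or proof: it builds a random two-layer ReLU network in the overcomplete regime (width m larger than dimension d), then takes a single gradient step sized to overshoot the decision boundary f = 0. The weight scalings, step-size rule, and all variable names are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 500, 2000  # ambient dimension d, width m (overcomplete: m > d)

# Random two-layer ReLU network f(x) = sum_i a_i * relu(<w_i, x>),
# with 1/sqrt(d) and 1/sqrt(m) scalings (an illustrative choice).
W = rng.normal(size=(m, d)) / np.sqrt(d)
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)

def f(x):
    return float(a @ np.maximum(W @ x, 0.0))

def grad_f(x):
    # Gradient of f at x: sum over active neurons of a_i * w_i.
    return (a * (W @ x > 0)) @ W

x = rng.normal(size=d)
x /= np.linalg.norm(x)  # unit-norm input

# Single gradient step, sized to overshoot the boundary f = 0
# (accurate to first order, since the network is piecewise linear).
g = grad_f(x)
eta = 2.0 * abs(f(x)) / (np.linalg.norm(g) ** 2)
x_adv = x - np.sign(f(x)) * eta * g

print(f(x), f(x_adv))             # outputs of opposite sign
print(np.linalg.norm(x_adv - x))  # perturbation is small vs ||x|| = 1
```

With these scalings f(x) is typically of order 1/sqrt(d) while the gradient norm is of order 1, so the perturbation needed to flip the sign is much smaller than the norm of the input itself, which is the sense in which x_adv is "adversarial".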
