Towards unlocking the mystery of adversarial fragility of neural networks

23 June 2024
Jingchao Gao, Raghu Mudumbai, Xiaodong Wu, Jirong Yi, Catherine Xu, Hui Xie, Weiyu Xu
arXiv:2406.16200
Abstract

In this paper, we study the adversarial robustness of deep neural networks for classification tasks. We look at the smallest magnitude of possible additive perturbations that can change the output of a classification algorithm. We provide a matrix-theoretic explanation of the adversarial fragility of deep neural networks for classification. In particular, our theoretical results show that a neural network's adversarial robustness can degrade as the input dimension $d$ increases. Analytically, we show that neural networks' adversarial robustness can be only $1/\sqrt{d}$ of the best possible adversarial robustness. Our matrix-theoretic explanation is consistent with an earlier information-theoretic, feature-compression-based explanation for the adversarial fragility of neural networks.
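As an illustration of the $1/\sqrt{d}$ gap the abstract describes, here is a minimal, hypothetical NumPy sketch (not the paper's construction): for two class means separated by a constant amount in every coordinate, a linear classifier that uses all $d$ coordinates needs an L2 perturbation of about $\sqrt{d}/2$ to flip its decision, while a classifier that collapses its decision onto a single coordinate, a crude stand-in for feature compression, is flipped by a perturbation of about $1/2$, so the robustness ratio is exactly $1/\sqrt{d}$.

```python
import numpy as np

for d in [10, 100, 1000, 10000]:
    # Two class means separated by 1.0 in every coordinate; take a
    # noise-free sample x from class 1 for clarity.
    mu0 = np.zeros(d)
    mu1 = np.ones(d)
    x = mu1.copy()
    mid = (mu0 + mu1) / 2  # a point on both decision boundaries below

    # "Best possible" linear classifier: nearest class mean. Its minimal
    # flipping perturbation is the L2 distance from x to the separating
    # hyperplane through the midpoint with normal (mu1 - mu0).
    w_opt = mu1 - mu0
    r_opt = abs(w_opt @ (x - mid)) / np.linalg.norm(w_opt)

    # "Compressed" classifier: thresholds a single coordinate only (a
    # crude stand-in for a network that compresses features). It can be
    # flipped by perturbing just that one coordinate.
    w_cmp = np.zeros(d)
    w_cmp[0] = 1.0
    r_cmp = abs(w_cmp @ (x - mid)) / np.linalg.norm(w_cmp)

    print(f"d={d:6d}  r_opt={r_opt:7.3f}  r_cmp={r_cmp:5.3f}  "
          f"ratio={r_cmp / r_opt:.4f}  1/sqrt(d)={1 / np.sqrt(d):.4f}")
```

The distance-to-hyperplane formula $|w^\top(x - x_0)|/\|w\|$ used here is the standard minimal-perturbation expression for linear classifiers; the paper's matrix-theoretic argument concerns deep networks rather than this toy linear case.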
