Relationship between Uncertainty in DNNs and Adversarial Attacks

20 September 2024
Mabel Ogonna
Abigail Adeniran
Adewale Adeyemo
    AAML
Abstract

Deep Neural Networks (DNNs) have achieved state-of-the-art results and even surpassed human accuracy in many challenging tasks, leading to the adoption of DNNs in a variety of fields including natural language processing, pattern recognition, prediction, and control optimization. However, DNNs are accompanied by uncertainty about their results, causing them to predict outcomes that are either incorrect or fall outside a given confidence level. These uncertainties stem from model or data constraints and can be exacerbated by adversarial attacks. Adversarial attacks supply perturbed inputs to DNNs, causing the DNN to make incorrect predictions or increasing model uncertainty. In this review, we explore the relationship between DNN uncertainty and adversarial attacks, emphasizing how adversarial attacks might raise DNN uncertainty.
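The sketch below illustrates the kind of relationship the abstract describes: an input perturbation crafted from the loss gradient (an FGSM-style attack, used here only as a common example, not necessarily one studied in the paper) can shift a classifier's predictive entropy, a standard proxy for predictive uncertainty. The model, input, and epsilon are placeholders chosen for a self-contained demonstration in PyTorch.

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy classifier standing in for a trained DNN (assumption: 784-d inputs, 10 classes).
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
model.eval()

def predictive_entropy(logits):
    """Shannon entropy of the softmax distribution (higher = more uncertain)."""
    probs = F.softmax(logits, dim=-1)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

# A single hypothetical input and label.
x = torch.rand(1, 784)
y = torch.tensor([3])

# FGSM-style perturbation: step along the sign of the loss gradient w.r.t. the input.
x_adv = x.clone().requires_grad_(True)
loss = F.cross_entropy(model(x_adv), y)
loss.backward()
epsilon = 0.1  # attack budget (assumed value)
x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()

with torch.no_grad():
    print("clean entropy:      ", predictive_entropy(model(x)).item())
    print("adversarial entropy:", predictive_entropy(model(x_adv)).item())

Comparing the two printed entropies on a real, trained model is one simple way to quantify how much an attack inflates uncertainty; the review surveys this relationship more broadly.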

@article{ogonna2025_2409.13232,
  title={Relationship between Uncertainty in DNNs and Adversarial Attacks},
  author={Mabel Ogonna and Abigail Adeniran and Adewale Adeyemo},
  journal={arXiv preprint arXiv:2409.13232},
  year={2025}
}