
Exploiting Vulnerabilities in Deep Neural Networks: Adversarial and Fault-Injection Attacks
arXiv:2105.03251

5 May 2021
Faiq Khalid
Muhammad Abdullah Hanif
Muhammad Shafique
    AAML
    SILM

Papers citing "Exploiting Vulnerabilities in Deep Neural Networks: Adversarial and Fault-Injection Attacks"

3 / 3 papers shown
Testing the Depth of ChatGPT's Comprehension via Cross-Modal Tasks Based on ASCII-Art: GPT3.5's Abilities in Regard to Recognizing and Generating ASCII-Art Are Not Totally Lacking
David Bayani
MLLM
28 Jul 2023
Robust Machine Learning Systems: Challenges, Current Trends, Perspectives, and the Road Ahead
Muhammad Shafique
Mahum Naseer
T. Theocharides
C. Kyrkou
O. Mutlu
Lois Orosa
Jungwook Choi
OOD
04 Jan 2021
Adversarial examples in the physical world
Alexey Kurakin
Ian Goodfellow
Samy Bengio
SILM
AAML
08 Jul 2016