
Towards the Quantification of Safety Risks in Deep Neural Networks
13 September 2020
Peipei Xu, Wenjie Ruan, Xiaowei Huang
arXiv:2009.06114

Papers citing "Towards the Quantification of Safety Risks in Deep Neural Networks"

4 of 4 citing papers shown.

Sparse Adversarial Video Attacks with Spatial Transformations
Ronghui Mu, Wenjie Ruan, Leandro Soriano Marcolino, Q. Ni
AAML · 10 Nov 2021

Adversarial Robustness of Deep Learning: Theory, Algorithms, and Applications
Wenjie Ruan, Xinping Yi, Xiaowei Huang
AAML, OOD · 24 Aug 2021

Safety Metrics for Semantic Segmentation in Autonomous Driving
Chih-Hong Cheng, Alois C. Knoll, Hsuan-Cheng Liao
21 May 2021

Generalizing Universal Adversarial Attacks Beyond Additive Perturbations
Yanghao Zhang, Wenjie Ruan, Fu Lee Wang, Xiaowei Huang
AAML · 15 Oct 2020