Towards Dependability Metrics for Neural Networks
arXiv:1806.02338 (v2, latest), 6 June 2018
Chih-Hong Cheng, Georg Nührenberg, Chung-Hao Huang, Harald Ruess, Hirotoshi Yasuoka
Papers citing "Towards Dependability Metrics for Neural Networks" (20 papers)
Investigating Calibration and Corruption Robustness of Post-hoc Pruned Perception CNNs: An Image Classification Benchmark Study
Pallavi Mitra, Gesina Schwalbe, Nadja Klein (31 May 2024) [AAML]

A Survey of Neural Network Robustness Assessment in Image Recognition
Jie Wang, Jun Ai, Minyan Lu, Haoran Su, Dan Yu, Yutao Zhang, Junda Zhu, Jingyu Liu (12 Apr 2024) [AAML]

Outline of an Independent Systematic Blackbox Test for ML-based Systems
H. Wiesbrock, Jürgen Grossmann (30 Jan 2024)

PEM: Perception Error Model for Virtual Testing of Autonomous Vehicles
A. Piazzoni, Jim Cherian, Justin Dauwels, Lap-Pui Chau (23 Feb 2023)
Backdoor Mitigation in Deep Neural Networks via Strategic Retraining
Akshay Dhonthi, E. M. Hahn, Vahid Hashemi (14 Dec 2022) [AAML]

Empowering the trustworthiness of ML-based critical systems through engineering activities
J. Mattioli, Agnès Delaborde, Souhaiel Khalfaoui, Freddy Lecue, H. Sohier, F. Jurie (30 Sep 2022)

Exploring ML testing in practice -- Lessons learned from an interactive rapid review with Axis Communications
Qunying Song, Markus Borg, Emelie Engström, H. Ardö, Sergio Rico (30 Mar 2022)

Safe AI -- How is this Possible?
Harald Ruess, Simon Burton (25 Jan 2022)

A causal model of safety assurance for machine learning
Simon Burton (14 Jan 2022) [CML]

Enabling Verification of Deep Neural Networks in Perception Tasks Using Fuzzy Logic and Concept Embeddings
Gesina Schwalbe, Christian Wirth, Ute Schmid (03 Jan 2022) [AAML]
Network Generalization Prediction for Safety Critical Tasks in Novel Operating Domains
Molly O'Brien, Michael Medoff, Julia V. Bukowski, Gregory Hager (17 Aug 2021) [OOD]

Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety
Sebastian Houben, Stephanie Abrecht, Maram Akila, Andreas Bär, Felix Brockherde, ..., Serin Varghese, Michael Weber, Sebastian J. Wirkert, Tim Wirtz, Matthias Woehrle (29 Apr 2021) [AAML]

Application of the Neural Network Dependability Kit in Real-World Environments
Amit Sahu, Noelia Vállez, Rosana Rodríguez-Bobada, Mohamad Alhaddad, Omar Moured, G. Neugschwandtner (14 Dec 2020)

Coverage Guided Testing for Recurrent Neural Networks
Wei Huang, Youcheng Sun, Xing-E. Zhao, James Sharp, Wenjie Ruan, Jie Meng, Xiaowei Huang (05 Nov 2019) [AAML]

A Systematic Mapping Study on Testing of Machine Learning Programs
S. Sherin, Muhammad Uzair Khan, Muhammad Zohaib Z. Iqbal (11 Jul 2019)
Machine Learning Testing: Survey, Landscapes and Horizons
Jie M. Zhang, Mark Harman, Lei Ma, Yang Liu (19 Jun 2019) [VLM, AILaw]

Engineering problems in machine learning systems
Hiroshi Kuwajima, Hirotoshi Yasuoka, Toshihiro Nakae (01 Apr 2019)

A Survey of Safety and Trustworthiness of Deep Neural Networks: Verification, Testing, Adversarial Attack and Defence, and Interpretability
Xiaowei Huang, Daniel Kroening, Wenjie Ruan, Marta Kwiatkowska, Youcheng Sun, Emese Thamo, Min Wu, Xinping Yi (18 Dec 2018) [AAML]

Traceability of Deep Neural Networks
Vincent Aravantinos, Frederik Diehl (17 Dec 2018)

nn-dependability-kit: Engineering Neural Networks for Safety-Critical Autonomous Driving Systems
Chih-Hong Cheng, Chung-Hao Huang, Georg Nührenberg (16 Nov 2018)