Safety Verification of Deep Neural Networks
arXiv:1610.06940 (v3, latest) · 21 October 2016 · AAML
Xiaowei Huang, Marta Kwiatkowska, Sen Wang, Min Wu
Papers citing "Safety Verification of Deep Neural Networks" (48 of 448 papers shown):
Verifiable Reinforcement Learning via Policy Extraction · Osbert Bastani, Yewen Pu, Armando Solar-Lezama · [OffRL] · 151 / 339 / 0 · 22 May 2018
Detecting Adversarial Samples for Deep Neural Networks through Mutation Testing · Jingyi Wang, Jun Sun, Peixin Zhang, Xinyu Wang · [AAML] · 76 / 41 / 0 · 14 May 2018
Quantitative Projection Coverage for Testing ML-enabled Autonomous Systems · Chih-Hong Cheng, Chung-Hao Huang, Hirotoshi Yasuoka · 49 / 41 / 0 · 11 May 2018
Reachability Analysis of Deep Neural Networks with Provable Guarantees · Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska · [AAML] · 76 / 271 / 0 · 06 May 2018
Concolic Testing for Deep Neural Networks · Youcheng Sun, Min Wu, Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska, Daniel Kroening · 99 / 335 / 0 · 30 Apr 2018
Formal Security Analysis of Neural Networks using Symbolic Intervals · Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, Suman Jana · [AAML] · 86 / 478 / 0 · 28 Apr 2018
Semantic Adversarial Deep Learning · Sanjit A. Seshia, S. Jha, T. Dreossi · [AAML, SILM] · 77 / 91 / 0 · 19 Apr 2018
Simulation-based Adversarial Test Generation for Autonomous Vehicles with Machine Learning Components · Cumhur Erkan Tuncali, Georgios Fainekos, Hisahiro Ito, J. Kapinski · 81 / 183 / 0 · 18 Apr 2018
Global Robustness Evaluation of Deep Neural Networks with Provable Guarantees for the L_0 Norm · Wenjie Ruan, Min Wu, Youcheng Sun, Xiaowei Huang, Daniel Kroening, Marta Kwiatkowska · [AAML] · 65 / 39 / 0 · 16 Apr 2018
Reasoning about Safety of Learning-Enabled Components in Autonomous Cyber-physical Systems · Cumhur Erkan Tuncali, J. Kapinski, Hisahiro Ito, Jyotirmoy V. Deshmukh · 73 / 42 / 0 · 11 Apr 2018
A Dual Approach to Scalable Verification of Deep Networks · Krishnamurthy Dvijotham, Robert Stanforth, Sven Gowal, Timothy A. Mann, Pushmeet Kohli · 70 / 399 / 0 · 17 Mar 2018
Testing Deep Neural Networks · Youcheng Sun, Xiaowei Huang, Daniel Kroening, James Sharp, Matthew Hill, Rob Ashmore · [AAML] · 88 / 219 / 0 · 10 Mar 2018
Improved Explainability of Capsule Networks: Relevance Path by Agreement · Atefeh Shahroudnejad, Arash Mohammadi, Konstantinos N. Plataniotis · [AAML, MedIm] · 50 / 62 / 0 · 27 Feb 2018
Constrained Image Generation Using Binarized Neural Networks with Decision Procedures · S. Korneev, Nina Narodytska, Luca Pulina, A. Tacchella, Nikolaj S. Bjørner, Shmuel Sagiv · [MQ] · 48 / 13 / 0 · 24 Feb 2018
Certified Robustness to Adversarial Examples with Differential Privacy · Mathias Lécuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel J. Hsu, Suman Jana · [SILM, AAML] · 131 / 940 / 0 · 09 Feb 2018
Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach · Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Jinfeng Yi, D. Su, Yupeng Gao, Cho-Jui Hsieh, Luca Daniel · [AAML] · 85 / 469 / 0 · 31 Jan 2018
Certified Defenses against Adversarial Examples · Aditi Raghunathan, Jacob Steinhardt, Percy Liang · [AAML] · 130 / 969 / 0 · 29 Jan 2018
Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey · Naveed Akhtar, Ajmal Mian · [AAML] · 146 / 1,873 / 0 · 02 Jan 2018
Reachable Set Computation and Safety Verification for Neural Networks with ReLU Activations · Weiming Xiang, Hoang-Dung Tran, Taylor T. Johnson · 57 / 99 / 0 · 21 Dec 2017
Adversarial Examples: Attacks and Defenses for Deep Learning · Xiaoyong Yuan, Pan He, Qile Zhu, Xiaolin Li · [SILM, AAML] · 149 / 1,628 / 0 · 19 Dec 2017
Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning · Battista Biggio, Fabio Roli · [AAML] · 139 / 1,410 / 0 · 08 Dec 2017
How to Learn a Model Checker · Dung Phan, Radu Grosu, Nicola Paoletti, S. Smolka, Scott D. Stoller · 16 / 0 / 0 · 05 Dec 2017
Towards Practical Verification of Machine Learning: The Case of Computer Vision Systems · Kexin Pei, Linjie Zhu, Yinzhi Cao, Junfeng Yang, Carl Vondrick, Suman Jana · [AAML] · 111 / 103 / 0 · 05 Dec 2017
AI Safety Gridworlds · Jan Leike, Miljan Martic, Victoria Krakovna, Pedro A. Ortega, Tom Everitt, Andrew Lefrancq, Laurent Orseau, Shane Legg · 140 / 255 / 0 · 27 Nov 2017
How Wrong Am I? - Studying Adversarial Examples and their Impact on Uncertainty in Gaussian Process Machine Learning Models · Kathrin Grosse, David Pfaff, M. Smith, Michael Backes · [AAML] · 82 / 9 / 0 · 17 Nov 2017
Provable defenses against adversarial examples via the convex outer adversarial polytope · Eric Wong, J. Zico Kolter · [AAML] · 190 / 1,506 / 0 · 02 Nov 2017
Certifying Some Distributional Robustness with Principled Adversarial Training · Aman Sinha, Hongseok Namkoong, Riccardo Volpi, John C. Duchi · [OOD] · 143 / 866 / 0 · 29 Oct 2017
Feature-Guided Black-Box Safety Testing of Deep Neural Networks · Matthew Wicker, Xiaowei Huang, Marta Kwiatkowska · [AAML] · 77 / 235 / 0 · 21 Oct 2017
Verification of Binarized Neural Networks via Inter-Neuron Factoring · Chih-Hong Cheng, Georg Nührenberg, Chung-Hao Huang, Harald Ruess · [AAML] · 85 / 21 / 0 · 09 Oct 2017
DeepSafe: A Data-driven Approach for Checking Adversarial Robustness in Neural Networks · D. Gopinath, Guy Katz, C. Păsăreanu, Clark W. Barrett · [AAML] · 141 / 87 / 0 · 02 Oct 2017
Provably Minimally-Distorted Adversarial Examples · Nicholas Carlini, Guy Katz, Clark W. Barrett, D. Dill · [AAML] · 105 / 89 / 0 · 29 Sep 2017
Output Range Analysis for Deep Neural Networks · Souradeep Dutta, Susmit Jha, S. Sankaranarayanan, A. Tiwari · [AAML] · 67 / 120 / 0 · 26 Sep 2017
Verifying Properties of Binarized Deep Neural Networks · Nina Narodytska, S. Kasiviswanathan, L. Ryzhyk, Shmuel Sagiv, T. Walsh · [AAML] · 101 / 217 / 0 · 19 Sep 2017
An Analysis of ISO 26262: Using Machine Learning Safely in Automotive Software · Rick Salay, Rodrigo Queiroz, Krzysztof Czarnecki · 55 / 133 / 0 · 07 Sep 2017
DeepTest: Automated Testing of Deep-Neural-Network-driven Autonomous Cars · Yuchi Tian, Kexin Pei, Suman Jana, Baishakhi Ray · [AAML] · 97 / 1,365 / 0 · 28 Aug 2017
ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models · Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, Cho-Jui Hsieh · [AAML] · 115 / 1,894 / 0 · 14 Aug 2017
Systematic Testing of Convolutional Neural Networks for Autonomous Driving · T. Dreossi, Shromona Ghosh, Alberto L. Sangiovanni-Vincentelli, Sanjit A. Seshia · 92 / 61 / 0 · 10 Aug 2017
Output Reachable Set Estimation and Verification for Multi-Layer Neural Networks · Weiming Xiang, Hoang-Dung Tran, Taylor T. Johnson · 148 / 294 / 0 · 09 Aug 2017
Efficient Defenses Against Adversarial Attacks · Valentina Zantedeschi, Maria-Irina Nicolae, Ambrish Rawat · [AAML] · 74 / 297 / 0 · 21 Jul 2017
An approach to reachability analysis for feed-forward ReLU neural networks · A. Lomuscio, Lalit Maganti · 82 / 359 / 0 · 22 Jun 2017
DeepXplore: Automated Whitebox Testing of Deep Learning Systems · Kexin Pei, Yinzhi Cao, Junfeng Yang, Suman Jana · [AAML] · 138 / 1,376 / 0 · 18 May 2017
Extending Defensive Distillation · Nicolas Papernot, Patrick McDaniel · [AAML] · 88 / 119 / 0 · 15 May 2017
Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks · Rüdiger Ehlers · 127 / 626 / 0 · 03 May 2017
Maximum Resilience of Artificial Neural Networks · Chih-Hong Cheng, Georg Nührenberg, Harald Ruess · [AAML] · 145 / 284 / 0 · 28 Apr 2017
Compositional Falsification of Cyber-Physical Systems with Machine Learning Components · T. Dreossi, Alexandre Donzé, Sanjit A. Seshia · [AAML] · 131 / 231 / 0 · 02 Mar 2017
Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks · Guy Katz, Clark W. Barrett, D. Dill, Kyle D. Julian, Mykel Kochenderfer · [AAML] · 334 / 1,877 / 0 · 03 Feb 2017
Towards Evaluating the Robustness of Neural Networks · Nicholas Carlini, D. Wagner · [OOD, AAML] · 288 / 8,603 / 0 · 16 Aug 2016
Towards Verified Artificial Intelligence · Sanjit A. Seshia, Dorsa Sadigh, S. Shankar Sastry · 126 / 203 / 0 · 27 Jun 2016