arXiv:1805.12514
Scaling provable adversarial defenses
31 May 2018
Eric Wong
Frank R. Schmidt
J. H. Metzen
J. Zico Kolter
AAML
Links: arXiv (abs) · PDF · HTML · GitHub (387★)
Papers citing "Scaling provable adversarial defenses" (50 of 273 shown)
A Fast Saddle-Point Dynamical System Approach to Robust Deep Learning
Yasaman Esfandiari
Aditya Balu
K. Ebrahimi
Umesh Vaidya
N. Elia
Soumik Sarkar
OOD
47
3
0
18 Oct 2019
Universal Approximation with Certified Networks
Maximilian Baader
M. Mirman
Martin Vechev
71
22
0
30 Sep 2019
Towards Robust Direct Perception Networks for Automated Driving
Chih-Hong Cheng
16
1
0
30 Sep 2019
Defending Against Physically Realizable Attacks on Image Classification
Tong Wu
Liang Tong
Yevgeniy Vorobeychik
AAML
84
126
0
20 Sep 2019
Adversarial Attacks and Defenses in Images, Graphs and Text: A Review
Han Xu
Yao Ma
Haochen Liu
Debayan Deb
Hui Liu
Jiliang Tang
Anil K. Jain
AAML
79
680
0
17 Sep 2019
Adversarial Robustness Against the Union of Multiple Perturbation Models
Pratyush Maini
Eric Wong
J. Zico Kolter
OOD
AAML
63
151
0
09 Sep 2019
Understanding Adversarial Robustness: The Trade-off between Minimum and Average Margin
Kaiwen Wu
Yaoliang Yu
AAML
45
8
0
26 Jul 2019
Connecting Lyapunov Control Theory to Adversarial Attacks
Arash Rahnama
A. Nguyen
Edward Raff
AAML
21
6
0
17 Jul 2019
ART: Abstraction Refinement-Guided Training for Provably Correct Neural Networks
Xuankang Lin
He Zhu
R. Samanta
Suresh Jagannathan
AAML
92
29
0
17 Jul 2019
Towards Robust, Locally Linear Deep Networks
Guang-He Lee
David Alvarez-Melis
Tommi Jaakkola
ODL
135
48
0
07 Jul 2019
Interpolated Adversarial Training: Achieving Robust Neural Networks without Sacrificing Too Much Accuracy
Alex Lamb
Vikas Verma
Kenji Kawaguchi
Alexander Matyasko
Savya Khosla
Arno Solin
Yoshua Bengio
AAML
70
99
0
16 Jun 2019
Towards Stable and Efficient Training of Verifiably Robust Neural Networks
Huan Zhang
Hongge Chen
Chaowei Xiao
Sven Gowal
Robert Stanforth
Yue Liu
Duane S. Boning
Cho-Jui Hsieh
AAML
107
351
0
14 Jun 2019
Towards Compact and Robust Deep Neural Networks
Vikash Sehwag
Shiqi Wang
Prateek Mittal
Suman Jana
AAML
82
40
0
14 Jun 2019
Tight Certificates of Adversarial Robustness for Randomly Smoothed Classifiers
Guang-He Lee
Yang Yuan
Shiyu Chang
Tommi Jaakkola
AAML
73
127
0
12 Jun 2019
Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks
Mahyar Fazlyab
Alexander Robey
Hamed Hassani
M. Morari
George J. Pappas
142
462
0
12 Jun 2019
Evaluating the Robustness of Nearest Neighbor Classifiers: A Primal-Dual Perspective
Lu Wang
Xuanqing Liu
Jinfeng Yi
Zhi Zhou
Cho-Jui Hsieh
AAML
83
22
0
10 Jun 2019
Robustness Verification of Tree-based Models
Hongge Chen
Huan Zhang
Si Si
Yang Li
Duane S. Boning
Cho-Jui Hsieh
AAML
103
77
0
10 Jun 2019
Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers
Hadi Salman
Greg Yang
Jungshian Li
Pengchuan Zhang
Huan Zhang
Ilya P. Razenshteyn
Sébastien Bubeck
AAML
131
552
0
09 Jun 2019
Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks
Maksym Andriushchenko
Matthias Hein
84
62
0
08 Jun 2019
Adversarial Explanations for Understanding Image Classification Decisions and Improved Neural Network Robustness
Walt Woods
Jack H Chen
C. Teuscher
AAML
66
46
0
07 Jun 2019
Enhancing Gradient-based Attacks with Symbolic Intervals
Shiqi Wang
Yizheng Chen
Ahmed Abdou
Suman Jana
AAML
58
15
0
05 Jun 2019
Correctness Verification of Neural Networks
Yichen Yang
Martin Rinard
AAML
67
12
0
03 Jun 2019
Unlabeled Data Improves Adversarial Robustness
Y. Carmon
Aditi Raghunathan
Ludwig Schmidt
Percy Liang
John C. Duchi
130
754
0
31 May 2019
Are Labels Required for Improving Adversarial Robustness?
J. Uesato
Jean-Baptiste Alayrac
Po-Sen Huang
Robert Stanforth
Alhussein Fawzi
Pushmeet Kohli
AAML
95
335
0
31 May 2019
Empirically Measuring Concentration: Fundamental Limits on Intrinsic Robustness
Saeed Mahloujifar
Xiao Zhang
Mohammad Mahmoody
David Evans
64
22
0
29 May 2019
Controlling Neural Level Sets
Matan Atzmon
Niv Haim
Lior Yariv
Ofer Israelov
Haggai Maron
Y. Lipman
AI4CE
54
121
0
28 May 2019
Adversarially Robust Learning Could Leverage Computational Hardness
Sanjam Garg
S. Jha
Saeed Mahloujifar
Mohammad Mahmoody
AAML
163
24
0
28 May 2019
Scaleable input gradient regularization for adversarial robustness
Chris Finlay
Adam M. Oberman
AAML
101
79
0
27 May 2019
Provable robustness against all adversarial $l_p$-perturbations for $p \geq 1$
Francesco Croce
Matthias Hein
OOD
78
75
0
27 May 2019
Privacy Risks of Securing Machine Learning Models against Adversarial Examples
Liwei Song
Reza Shokri
Prateek Mittal
SILM
MIACV
AAML
88
248
0
24 May 2019
Testing DNN Image Classifiers for Confusion & Bias Errors
Yuchi Tian
Ziyuan Zhong
Vicente Ordonez
Gail E. Kaiser
Baishakhi Ray
153
53
0
20 May 2019
What Do Adversarially Robust Models Look At?
Takahiro Itazuri
Yoshihiro Fukuhara
Hirokatsu Kataoka
Shigeo Morishima
32
5
0
19 May 2019
Better the Devil you Know: An Analysis of Evasion Attacks using Out-of-Distribution Adversarial Examples
Vikash Sehwag
A. Bhagoji
Liwei Song
Chawin Sitawarin
Daniel Cullina
M. Chiang
Prateek Mittal
OODD
77
26
0
05 May 2019
Adversarial Training for Free!
Ali Shafahi
Mahyar Najibi
Amin Ghiasi
Zheng Xu
John P. Dickerson
Christoph Studer
L. Davis
Gavin Taylor
Tom Goldstein
AAML
139
1,254
0
29 Apr 2019
Universal Lipschitz Approximation in Bounded Depth Neural Networks
Jérémy E. Cohen
Todd P. Huster
Ravid Cohen
AAML
65
23
0
09 Apr 2019
On Training Robust PDF Malware Classifiers
Yizheng Chen
Shiqi Wang
Dongdong She
Suman Jana
AAML
99
69
0
06 Apr 2019
A Provable Defense for Deep Residual Networks
M. Mirman
Gagandeep Singh
Martin Vechev
82
26
0
29 Mar 2019
Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks
Francesco Croce
Jonas Rauber
Matthias Hein
AAML
60
31
0
27 Mar 2019
Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness
J. Jacobsen
Jens Behrmann
Nicholas Carlini
Florian Tramèr
Nicolas Papernot
AAML
79
46
0
25 Mar 2019
Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes
Matt Jordan
Justin Lewis
A. Dimakis
AAML
79
57
0
20 Mar 2019
On Certifying Non-uniform Bound against Adversarial Attacks
Chen Liu
Ryota Tomioka
Volkan Cevher
AAML
79
19
0
15 Mar 2019
Robust Decision Trees Against Adversarial Examples
Hongge Chen
Huan Zhang
Duane S. Boning
Cho-Jui Hsieh
AAML
142
117
0
27 Feb 2019
Verification of Non-Linear Specifications for Neural Networks
Chongli Qin
Krishnamurthy Dvijotham
Brendan O'Donoghue
Rudy Bunel
Robert Stanforth
Sven Gowal
J. Uesato
G. Swirszcz
Pushmeet Kohli
AAML
68
44
0
25 Feb 2019
A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks
Hadi Salman
Greg Yang
Huan Zhang
Cho-Jui Hsieh
Pengchuan Zhang
AAML
146
271
0
23 Feb 2019
Wasserstein Adversarial Examples via Projected Sinkhorn Iterations
Eric Wong
Frank R. Schmidt
J. Zico Kolter
AAML
95
211
0
21 Feb 2019
Certified Adversarial Robustness via Randomized Smoothing
Jeremy M. Cohen
Elan Rosenfeld
J. Zico Kolter
AAML
210
2,056
0
08 Feb 2019
Augmenting Model Robustness with Transformation-Invariant Attacks
Houpu Yao
Zhe Wang
Guangyu Nie
Yassine Mazboudi
Yezhou Yang
Yi Ren
AAML
OOD
31
3
0
31 Jan 2019
Theoretically Principled Trade-off between Robustness and Accuracy
Hongyang R. Zhang
Yaodong Yu
Jiantao Jiao
Eric Xing
L. Ghaoui
Michael I. Jordan
187
2,566
0
24 Jan 2019
The Limitations of Adversarial Training and the Blind-Spot Attack
Huan Zhang
Hongge Chen
Zhao Song
Duane S. Boning
Inderjit S. Dhillon
Cho-Jui Hsieh
AAML
73
145
0
15 Jan 2019
Adversarial Attacks, Regression, and Numerical Stability Regularization
A. Nguyen
Edward Raff
AAML
44
30
0
07 Dec 2018