arXiv: 1805.12514
Scaling provable adversarial defenses
31 May 2018
Eric Wong
Frank R. Schmidt
J. H. Metzen
J. Zico Kolter
AAML
Links: arXiv (abs) · PDF · HTML · GitHub (387★)
Papers citing "Scaling provable adversarial defenses" (50 of 273 papers shown)
Adversarial Robustness against Multiple and Single ℓp-Threat Models via Quick Fine-Tuning of Robust Classifiers
Francesco Croce
Matthias Hein
OOD
AAML
67
18
0
26 May 2021
Skew Orthogonal Convolutions
Sahil Singla
Soheil Feizi
79
69
0
24 May 2021
Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness?
Vikash Sehwag
Saeed Mahloujifar
Tinashe Handina
Sihui Dai
Chong Xiang
M. Chiang
Prateek Mittal
OOD
102
131
0
19 Apr 2021
Orthogonalizing Convolutional Layers with the Cayley Transform
Asher Trockman
J. Zico Kolter
95
115
0
14 Apr 2021
A Review of Formal Methods applied to Machine Learning
Caterina Urban
Antoine Miné
93
57
0
06 Apr 2021
Domain Invariant Adversarial Learning
Matan Levi
Idan Attias
A. Kontorovich
AAML
OOD
122
11
0
01 Apr 2021
Fast Certified Robust Training with Short Warmup
Zhouxing Shi
Yihan Wang
Huan Zhang
Jinfeng Yi
Cho-Jui Hsieh
AAML
99
57
0
31 Mar 2021
Robustness Certification for Point Cloud Models
Tobias Lorenz
Anian Ruoss
Mislav Balunović
Gagandeep Singh
Martin Vechev
3DPC
101
26
0
30 Mar 2021
Learning Defense Transformers for Counterattacking Adversarial Examples
Jincheng Li
Jingyun Liang
Yifan Zhang
Jian Chen
Mingkui Tan
AAML
67
2
0
13 Mar 2021
Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Complete and Incomplete Neural Network Robustness Verification
Shiqi Wang
Huan Zhang
Kaidi Xu
Xue Lin
Suman Jana
Cho-Jui Hsieh
Zico Kolter
123
202
0
11 Mar 2021
PRIMA: General and Precise Neural Network Certification via Scalable Convex Hull Approximations
Mark Niklas Müller
Gleb Makarchuk
Gagandeep Singh
Markus Püschel
Martin Vechev
95
92
0
05 Mar 2021
Explaining Adversarial Vulnerability with a Data Sparsity Hypothesis
Mahsa Paknezhad
Cuong Phuc Ngo
Amadeus Aristo Winarto
Alistair Cheong
Beh Chuen Yang
Wu Jiayang
Lee Hwee Kuan
OOD
AAML
74
9
0
01 Mar 2021
Tiny Adversarial Multi-Objective Oneshot Neural Architecture Search
Guoyang Xie
Jinbao Wang
Guo-Ding Yu
Feng Zheng
Yaochu Jin
AAML
71
6
0
28 Feb 2021
Globally-Robust Neural Networks
Klas Leino
Zifan Wang
Matt Fredrikson
AAML
OOD
159
131
0
16 Feb 2021
Low Curvature Activations Reduce Overfitting in Adversarial Training
Vasu Singla
Sahil Singla
David Jacobs
Soheil Feizi
AAML
102
47
0
15 Feb 2021
On the Paradox of Certified Training
Nikola Jovanović
Mislav Balunović
Maximilian Baader
Martin Vechev
OOD
97
13
0
12 Feb 2021
Towards Certifying L-infinity Robustness using Neural Networks with L-inf-dist Neurons
Bohang Zhang
Tianle Cai
Zhou Lu
Di He
Liwei Wang
OOD
79
51
0
10 Feb 2021
Towards Bridging the gap between Empirical and Certified Robustness against Adversarial Examples
Jay Nandy
Sudipan Saha
Wynne Hsu
Mong Li Lee
Xiaosu Zhu
AAML
64
3
0
09 Feb 2021
Efficient Certified Defenses Against Patch Attacks on Image Classifiers
J. H. Metzen
Maksym Yatsura
AAML
61
41
0
08 Feb 2021
Fast Training of Provably Robust Neural Networks by SingleProp
Akhilan Boopathy
Tsui-Wei Weng
Sijia Liu
Pin-Yu Chen
Gaoyuan Zhang
Luca Daniel
AAML
57
7
0
01 Feb 2021
Adversarial Robustness by Design through Analog Computing and Synthetic Gradients
Alessandro Cappelli
Ruben Ohana
Julien Launay
Laurent Meunier
Iacopo Poli
Florent Krzakala
AAML
123
13
0
06 Jan 2021
Adaptive Verifiable Training Using Pairwise Class Similarity
Shiqi Wang
Kevin Eykholt
Taesung Lee
Jiyong Jang
Ian Molloy
OOD
33
1
0
14 Dec 2020
Evaluating adversarial robustness in simulated cerebellum
Liu Yuezhang
Bo Li
Qifeng Chen
AAML
13
0
0
05 Dec 2020
Deterministic Certification to Adversarial Attacks via Bernstein Polynomial Approximation
Ching-Chia Kao
Jhe-Bang Ko
Chun-Shien Lu
AAML
55
1
0
28 Nov 2020
Fast and Complete: Enabling Complete Neural Network Verification with Rapid and Massively Parallel Incomplete Verifiers
Kaidi Xu
Huan Zhang
Shiqi Wang
Yihan Wang
Suman Jana
Xue Lin
Cho-Jui Hsieh
124
188
0
27 Nov 2020
Robustified Domain Adaptation
Jiajin Zhang
Hanqing Chao
Pingkun Yan
36
4
0
18 Nov 2020
Adversarially Robust Classification based on GLRT
Bhagyashree Puranik
Upamanyu Madhow
Ramtin Pedarsani
VLM
AAML
58
4
0
16 Nov 2020
Ensemble of Models Trained by Key-based Transformed Images for Adversarially Robust Defense Against Black-box Attacks
Maungmaung Aprilpyone
Hitoshi Kiya
FedML
46
1
0
16 Nov 2020
Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations
Jinyuan Jia
Binghui Wang
Xiaoyu Cao
Hongbin Liu
Neil Zhenqiang Gong
61
24
0
15 Nov 2020
Trustworthy AI
Richa Singh
Mayank Vatsa
Nalini Ratha
53
4
0
02 Nov 2020
A Single-Loop Smoothed Gradient Descent-Ascent Algorithm for Nonconvex-Concave Min-Max Problems
Jiawei Zhang
Peijun Xiao
Ruoyu Sun
Zhi-Quan Luo
120
99
0
29 Oct 2020
Attack Agnostic Adversarial Defense via Visual Imperceptible Bound
S. Chhabra
Akshay Agarwal
Richa Singh
Mayank Vatsa
AAML
64
3
0
25 Oct 2020
Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming
Sumanth Dathathri
Krishnamurthy Dvijotham
Alexey Kurakin
Aditi Raghunathan
J. Uesato
...
Shreya Shankar
Jacob Steinhardt
Ian Goodfellow
Percy Liang
Pushmeet Kohli
AAML
107
95
0
22 Oct 2020
Certified Distributional Robustness on Smoothed Classifiers
Jungang Yang
Liyao Xiang
Pengzhi Chu
Yukun Wang
Cheng Zhou
Xinbing Wang
AAML
47
0
0
21 Oct 2020
Towards Understanding the Dynamics of the First-Order Adversaries
Zhun Deng
Hangfeng He
Jiaoyang Huang
Weijie J. Su
AAML
46
11
0
20 Oct 2020
Robust Neural Networks inspired by Strong Stability Preserving Runge-Kutta methods
Byungjoo Kim
Bryce Chudomelka
Jinyoung Park
Jaewoo Kang
Youngjoon Hong
Hyunwoo J. Kim
AAML
46
6
0
20 Oct 2020
Understanding Local Robustness of Deep Neural Networks under Natural Variations
Ziyuan Zhong
Yuchi Tian
Baishakhi Ray
AAML
61
1
0
09 Oct 2020
Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples
Sven Gowal
Chongli Qin
J. Uesato
Timothy A. Mann
Pushmeet Kohli
AAML
73
331
0
07 Oct 2020
Block-wise Image Transformation with Secret Key for Adversarially Robust Defense
Maungmaung Aprilpyone
Hitoshi Kiya
76
57
0
02 Oct 2020
Efficient Certification of Spatial Robustness
Anian Ruoss
Maximilian Baader
Mislav Balunović
Martin Vechev
AAML
75
26
0
19 Sep 2020
SoK: Certified Robustness for Deep Neural Networks
Linyi Li
Tao Xie
Yue Liu
AAML
123
131
0
09 Sep 2020
Detection Defense Against Adversarial Attacks with Saliency Map
Dengpan Ye
Chuanxi Chen
Changrui Liu
Hao Wang
Shunzhi Jiang
AAML
57
28
0
06 Sep 2020
Adversarially Robust Learning via Entropic Regularization
Gauri Jagatap
Ameya Joshi
A. B. Chowdhury
S. Garg
Chinmay Hegde
OOD
125
11
0
27 Aug 2020
On ℓp-norm Robustness of Ensemble Stumps and Trees
Yihan Wang
Huan Zhang
Hongge Chen
Duane S. Boning
Cho-Jui Hsieh
AAML
42
7
0
20 Aug 2020
Robust Machine Learning via Privacy/Rate-Distortion Theory
Ye Wang
Shuchin Aeron
Adnan Siraj Rakin
T. Koike-Akino
P. Moulin
OOD
65
6
0
22 Jul 2020
Learning perturbation sets for robust machine learning
Eric Wong
J. Zico Kolter
OOD
76
81
0
16 Jul 2020
Neural Network Virtual Sensors for Fuel Injection Quantities with Provable Performance Specifications
Eric Wong
Tim Schneider
Joerg Schmitt
Frank R. Schmidt
J. Zico Kolter
AAML
69
8
0
30 Jun 2020
The Convex Relaxation Barrier, Revisited: Tightened Single-Neuron Relaxations for Neural Network Verification
Christian Tjandraatmadja
Ross Anderson
Joey Huchette
Will Ma
Krunal Patel
J. Vielma
AAML
131
89
0
24 Jun 2020
Defense against Adversarial Attacks in NLP via Dirichlet Neighborhood Ensemble
Yi Zhou
Xiaoqing Zheng
Cho-Jui Hsieh
Kai-Wei Chang
Xuanjing Huang
SILM
105
48
0
20 Jun 2020
From Discrete to Continuous Convolution Layers
Assaf Shocher
Ben Feinstein
Niv Haim
Michal Irani
54
17
0
19 Jun 2020