arXiv: 1802.00420
Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
1 February 2018
Anish Athalye
Nicholas Carlini
D. Wagner
AAML
Papers citing "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples"
32 / 1,982 papers shown
Detecting Adversarial Examples via Neural Fingerprinting
Sumanth Dathathri
Stephan Zheng
Tianwei Yin
Richard M. Murray
Yisong Yue
MLAU
AAML
163
0
0
11 Mar 2018
Style Memory: Making a Classifier Network Generative
R. Wiyatno
Jeff Orchard
111
5
0
05 Mar 2018
Understanding and Enhancing the Transferability of Adversarial Examples
Lei Wu
Zhanxing Zhu
Cheng Tai
E. Weinan
AAML
SILM
136
116
0
27 Feb 2018
Robust GANs against Dishonest Adversaries
Zhi Xu
Chengtao Li
Stefanie Jegelka
AAML
196
3
0
27 Feb 2018
Hessian-based Analysis of Large Batch Training and Robustness to Adversaries
Z. Yao
A. Gholami
Qi Lei
Kurt Keutzer
Michael W. Mahoney
420
176
0
22 Feb 2018
L2-Nonexpansive Neural Networks
Haifeng Qian
M. Wegman
229
76
0
22 Feb 2018
Shield: Fast, Practical Defense and Vaccination for Deep Learning using JPEG Compression
Nilaksh Das
Madhuri Shanbhogue
Shang-Tse Chen
Fred Hohman
Siwei Li
Li-Wei Chen
Michael E. Kounavis
Duen Horng Chau
FedML
AAML
206
245
0
19 Feb 2018
Divide, Denoise, and Defend against Adversarial Attacks
Seyed-Mohsen Moosavi-Dezfooli
A. Shrivastava
Oncel Tuzel
AAML
159
46
0
19 Feb 2018
Are Generative Classifiers More Robust to Adversarial Attacks?
Yingzhen Li
John Bradshaw
Yash Sharma
AAML
252
85
0
19 Feb 2018
Adversarial Risk and the Dangers of Evaluating Against Weak Attacks
J. Uesato
Brendan O'Donoghue
Aaron van den Oord
Pushmeet Kohli
AAML
572
636
0
15 Feb 2018
Fooling OCR Systems with Adversarial Text Images
Congzheng Song
Vitaly Shmatikov
AAML
107
53
0
15 Feb 2018
Predicting Adversarial Examples with High Confidence
A. Galloway
Graham W. Taylor
M. Moussa
AAML
135
9
0
13 Feb 2018
Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks
Yusuke Tsuzuku
Issei Sato
Masashi Sugiyama
AAML
481
344
0
12 Feb 2018
Certified Robustness to Adversarial Examples with Differential Privacy
Mathias Lécuyer
Vaggelis Atlidakis
Roxana Geambasu
Daniel J. Hsu
Suman Jana
SILM
AAML
721
988
0
09 Feb 2018
Blind Pre-Processing: A Robust Defense Method Against Adversarial Examples
Adnan Siraj Rakin
Zhezhi He
Boqing Gong
Deliang Fan
AAML
169
4
0
05 Feb 2018
First-order Adversarial Vulnerability of Neural Networks and Input Dimension
Carl-Johann Simon-Gabriel
Yann Ollivier
Léon Bottou
Bernhard Schölkopf
David Lopez-Paz
AAML
430
49
0
05 Feb 2018
Secure Detection of Image Manipulation by means of Random Feature Selection
Zhongfu Chen
B. Tondi
Xiaolong Li
R. Ni
Yao-Min Zhao
Mauro Barni
AAML
197
36
0
02 Feb 2018
Adversarial Spheres
Justin Gilmer
Luke Metz
Fartash Faghri
S. Schoenholz
M. Raghu
Martin Wattenberg
Ian Goodfellow
AAML
179
6
0
09 Jan 2018
Audio Adversarial Examples: Targeted Attacks on Speech-to-Text
Nicholas Carlini
D. Wagner
AAML
215
1,146
0
05 Jan 2018
A General Framework for Adversarial Examples with Objectives
Mahmood Sharif
Sruti Bhagavatula
Lujo Bauer
Michael K. Reiter
AAML
GAN
260
216
0
31 Dec 2017
The Robust Manifold Defense: Adversarial Training using Generative Models
A. Jalal
Andrew Ilyas
C. Daskalakis
A. Dimakis
AAML
418
175
0
26 Dec 2017
Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
Battista Biggio
Fabio Roli
AAML
368
1,541
0
08 Dec 2017
Generative Adversarial Perturbations
Omid Poursaeed
Isay Katsman
Bicheng Gao
Serge J. Belongie
AAML
GAN
WIGM
469
385
0
06 Dec 2017
Towards Robust Neural Networks via Random Self-ensemble
Xuanqing Liu
Minhao Cheng
Huan Zhang
Cho-Jui Hsieh
FedML
AAML
382
445
0
02 Dec 2017
On the Robustness of Semantic Segmentation Models to Adversarial Attacks
Anurag Arnab
O. Mikšík
Juil Sock
AAML
358
326
0
27 Nov 2017
Reinforcing Adversarial Robustness using Model Confidence Induced by Adversarial Training
Xi Wu
Uyeong Jang
Jiefeng Chen
Lingjiao Chen
S. Jha
AAML
220
21
0
21 Nov 2017
Evaluating Robustness of Neural Networks with Mixed Integer Programming
Vincent Tjeng
Kai Y. Xiao
Russ Tedrake
AAML
273
119
0
20 Nov 2017
Adversarial Attacks Beyond the Image Space
Fangyin Wei
Chenxi Liu
Yu-Siang Wang
Weichao Qiu
Lingxi Xie
Yu-Wing Tai
Chi-Keung Tang
Alan Yuille
AAML
498
160
0
20 Nov 2017
Provable defenses against adversarial examples via the convex outer adversarial polytope
Eric Wong
J. Zico Kolter
AAML
671
1,565
0
02 Nov 2017
Provably Minimally-Distorted Adversarial Examples
Nicholas Carlini
Guy Katz
Clark W. Barrett
D. Dill
AAML
210
91
0
29 Sep 2017
Improving Robustness of ML Classifiers against Realizable Evasion Attacks Using Conserved Features
Liang Tong
Yue Liu
Chen Hajaj
Chaowei Xiao
Ning Zhang
Yevgeniy Vorobeychik
AAML
OOD
265
93
0
28 Aug 2017
Ensemble Adversarial Training: Attacks and Defenses
Florian Tramèr
Alexey Kurakin
Nicolas Papernot
Ian Goodfellow
Dan Boneh
Patrick McDaniel
AAML
492
2,944
0
19 May 2017