Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
Anish Athalye, Nicholas Carlini, D. Wagner
arXiv 1802.00420 · 1 February 2018 · AAML
Papers citing "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples" (showing 50 of 579 papers)
Learning Black-Box Attackers with Transferable Priors and Query Feedback
Jiancheng Yang, Yangzhou Jiang, Xiaoyang Huang, Bingbing Ni, Chenglong Zhao
AAML · 18 · 81 · 0 · 21 Oct 2020

Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness
Guillermo Ortiz-Jiménez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard
AAML · 29 · 48 · 0 · 19 Oct 2020

A Generative Model based Adversarial Security of Deep Learning and Linear Classifier Models
Ferhat Ozgur Catak, Samed Sivaslioglu, Kevser Sahinbas
AAML · 21 · 7 · 0 · 17 Oct 2020

A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning
Hongjun Wang, Guanbin Li, Xiaobai Liu, Liang Lin
GAN, AAML · 16 · 22 · 0 · 15 Oct 2020

Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples
Sven Gowal, Chongli Qin, J. Uesato, Timothy A. Mann, Pushmeet Kohli
AAML · 17 · 323 · 0 · 07 Oct 2020

Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder
Alvin Chan, Yi Tay, Yew-Soon Ong, Aston Zhang
SILM · 13 · 57 · 0 · 06 Oct 2020

Understanding Catastrophic Overfitting in Single-step Adversarial Training
Hoki Kim, Woojin Lee, Jaewook Lee
AAML · 11 · 107 · 0 · 05 Oct 2020

An Empirical Study of DNNs Robustification Inefficacy in Protecting Visual Recommenders
V. W. Anelli, T. D. Noia, Daniele Malitesta, Felice Antonio Merra
AAML · 19 · 2 · 0 · 02 Oct 2020

Adversarial Robustness of Stabilized NeuralODEs Might be from Obfuscated Gradients
Yifei Huang, Yaodong Yu, Hongyang R. Zhang, Yi-An Ma, Yuan Yao
AAML · 29 · 26 · 0 · 28 Sep 2020

Optimal Provable Robustness of Quantum Classification via Quantum Hypothesis Testing
Maurice Weber, Nana Liu, Bo-wen Li, Ce Zhang, Zhikuan Zhao
AAML · 29 · 28 · 0 · 21 Sep 2020

Adversarial Training with Stochastic Weight Average
Joong-won Hwang, Youngwan Lee, Sungchan Oh, Yuseok Bae
OOD, AAML · 19 · 11 · 0 · 21 Sep 2020

Certifying Confidence via Randomized Smoothing
Aounon Kumar, Alexander Levine, S. Feizi, Tom Goldstein
UQCV · 28 · 38 · 0 · 17 Sep 2020
Malicious Network Traffic Detection via Deep Learning: An Information Theoretic View
Erick Galinkin
AAML · 15 · 0 · 0 · 16 Sep 2020

Input Hessian Regularization of Neural Networks
Waleed Mustafa, Robert A. Vandermeulen, Marius Kloft
AAML · 14 · 12 · 0 · 14 Sep 2020

Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks
Wei-An Lin, Chun Pong Lau, Alexander Levine, Ramalingam Chellappa, S. Feizi
AAML · 81 · 60 · 0 · 05 Sep 2020

Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching
Jonas Geiping, Liam H. Fowl, W. R. Huang, W. Czaja, Gavin Taylor, Michael Moeller, Tom Goldstein
AAML · 19 · 215 · 0 · 04 Sep 2020

ASTRAL: Adversarial Trained LSTM-CNN for Named Entity Recognition
Jiuniu Wang, Wenjia Xu, Xingyu Fu, Guangluan Xu, Yirong Wu
20 · 57 · 0 · 02 Sep 2020

Simulating Unknown Target Models for Query-Efficient Black-box Attacks
Chen Ma, L. Chen, Junhai Yong
MLAU, OOD · 41 · 17 · 0 · 02 Sep 2020

Adversarially Robust Neural Architectures
Minjing Dong, Yanxi Li, Yunhe Wang, Chang Xu
AAML, OOD · 34 · 48 · 0 · 02 Sep 2020

Yet Another Intermediate-Level Attack
Qizhang Li, Yiwen Guo, Hao Chen
AAML · 24 · 51 · 0 · 20 Aug 2020

Addressing Neural Network Robustness with Mixup and Targeted Labeling Adversarial Training
Alfred Laugros, A. Caplier, Matthieu Ospici
AAML · 19 · 19 · 0 · 19 Aug 2020

Adversarial Examples on Object Recognition: A Comprehensive Survey
A. Serban, E. Poll, Joost Visser
AAML · 25 · 73 · 0 · 07 Aug 2020

Anti-Bandit Neural Architecture Search for Model Defense
Hanlin Chen, Baochang Zhang, Shenjun Xue, Xuan Gong, Hong Liu, Rongrong Ji, David Doermann
AAML · 14 · 33 · 0 · 03 Aug 2020

Stylized Adversarial Defense
Muzammal Naseer, Salman Khan, Munawar Hayat, F. Khan, Fatih Porikli
GAN, AAML · 20 · 16 · 0 · 29 Jul 2020

RANDOM MASK: Towards Robust Convolutional Neural Networks
Tiange Luo, Tianle Cai, Mengxiao Zhang, Siyu Chen, Liwei Wang
AAML, OOD · 16 · 17 · 0 · 27 Jul 2020
SoK: The Faults in our ASRs: An Overview of Attacks against Automatic Speech Recognition and Speaker Identification Systems
H. Abdullah, Kevin Warren, Vincent Bindschaedler, Nicolas Papernot, Patrick Traynor
AAML · 24 · 128 · 0 · 13 Jul 2020

How benign is benign overfitting?
Amartya Sanyal, P. Dokania, Varun Kanade, Philip H. S. Torr
NoLa, AAML · 23 · 57 · 0 · 08 Jul 2020

Improving Calibration through the Relationship with Adversarial Robustness
Yao Qin, Xuezhi Wang, Alex Beutel, Ed H. Chi
AAML · 27 · 25 · 0 · 29 Jun 2020

Proper Network Interpretability Helps Adversarial Robustness in Classification
Akhilan Boopathy, Sijia Liu, Gaoyuan Zhang, Cynthia Liu, Pin-Yu Chen, Shiyu Chang, Luca Daniel
AAML, FAtt · 19 · 66 · 0 · 26 Jun 2020

Uncovering the Connections Between Adversarial Transferability and Knowledge Transferability
Kaizhao Liang, Jacky Y. Zhang, Boxin Wang, Zhuolin Yang, Oluwasanmi Koyejo, B. Li
AAML · 28 · 25 · 0 · 25 Jun 2020

On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them
Chen Liu, Mathieu Salzmann, Tao R. Lin, Ryota Tomioka, Sabine Süsstrunk
AAML · 19 · 81 · 0 · 15 Jun 2020

Adversarial Self-Supervised Contrastive Learning
Minseon Kim, Jihoon Tack, Sung Ju Hwang
SSL · 20 · 246 · 0 · 13 Jun 2020

Large-Scale Adversarial Training for Vision-and-Language Representation Learning
Zhe Gan, Yen-Chun Chen, Linjie Li, Chen Zhu, Yu Cheng, Jingjing Liu
ObjD, VLM · 29 · 488 · 0 · 11 Jun 2020

Feature Purification: How Adversarial Training Performs Robust Deep Learning
Zeyuan Allen-Zhu, Yuanzhi Li
MLT, AAML · 27 · 146 · 0 · 20 May 2020

Increasing-Margin Adversarial (IMA) Training to Improve Adversarial Robustness of Neural Networks
Linhai Ma, Liang Liang
AAML · 23 · 18 · 0 · 19 May 2020

PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking
Chong Xiang, A. Bhagoji, Vikash Sehwag, Prateek Mittal
AAML · 22 · 29 · 0 · 17 May 2020
Encryption Inspired Adversarial Defense for Visual Classification
Maungmaung Aprilpyone, Hitoshi Kiya
16 · 32 · 0 · 16 May 2020

Towards Understanding the Adversarial Vulnerability of Skeleton-based Action Recognition
Tianhang Zheng, Sheng Liu, Changyou Chen, Junsong Yuan, Baochun Li, K. Ren
AAML · 19 · 17 · 0 · 14 May 2020

DeepRobust: A PyTorch Library for Adversarial Attacks and Defenses
Yaxin Li, Wei Jin, Han Xu, Jiliang Tang
AAML · 19 · 129 · 0 · 13 May 2020

Enhancing Intrinsic Adversarial Robustness via Feature Pyramid Decoder
Guanlin Li, Shuya Ding, Jun-Jie Luo, Chang-rui Liu
AAML · 42 · 19 · 0 · 06 May 2020

Adversarial Training against Location-Optimized Adversarial Patches
Sukrut Rao, David Stutz, Bernt Schiele
AAML · 11 · 91 · 0 · 05 May 2020

Robust Encodings: A Framework for Combating Adversarial Typos
Erik Jones, Robin Jia, Aditi Raghunathan, Percy Liang
AAML · 130 · 102 · 0 · 04 May 2020

Bullseye Polytope: A Scalable Clean-Label Poisoning Attack with Improved Transferability
H. Aghakhani, Dongyu Meng, Yu-Xiang Wang, Christopher Kruegel, Giovanni Vigna
AAML · 20 · 105 · 0 · 01 May 2020

Adversarial Learning Guarantees for Linear Hypotheses and Neural Networks
Pranjal Awasthi, Natalie Frank, M. Mohri
AAML · 26 · 56 · 0 · 28 Apr 2020

Adversarial Attacks and Defenses: An Interpretation Perspective
Ninghao Liu, Mengnan Du, Ruocheng Guo, Huan Liu, Xia Hu
AAML · 26 · 8 · 0 · 23 Apr 2020

Scalable Attack on Graph Data by Injecting Vicious Nodes
Jihong Wang, Minnan Luo, Fnu Suya, Jundong Li, Z. Yang, Q. Zheng
AAML, GNN · 19 · 85 · 0 · 22 Apr 2020
Single-step Adversarial training with Dropout Scheduling
B. S. Vivek, R. Venkatesh Babu
OOD, AAML · 16 · 71 · 0 · 18 Apr 2020
Certifiable Robustness to Adversarial State Uncertainty in Deep Reinforcement Learning
Michael Everett, Bjorn Lutjens, Jonathan P. How
AAML · 13 · 41 · 0 · 11 Apr 2020

Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning
Tianlong Chen, Sijia Liu, Shiyu Chang, Yu Cheng, Lisa Amini, Zhangyang Wang
AAML · 18 · 246 · 0 · 28 Mar 2020

DaST: Data-free Substitute Training for Adversarial Attacks
Mingyi Zhou, Jing Wu, Yipeng Liu, Shuaicheng Liu, Ce Zhu
11 · 142 · 0 · 28 Mar 2020