Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples

1 February 2018
Anish Athalye
Nicholas Carlini
D. Wagner
    AAML
arXiv:1802.00420

Papers citing "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples"

Showing 50 of 521 citing papers.
Who is Real Bob? Adversarial Attacks on Speaker Recognition Systems
Guangke Chen
Sen Chen
Lingling Fan
Xiaoning Du
Zhe Zhao
Fu Song
Yang Liu
AAML
12
193
0
03 Nov 2019
Enhancing Certifiable Robustness via a Deep Model Ensemble
Huan Zhang
Minhao Cheng
Cho-Jui Hsieh
25
9
0
31 Oct 2019
A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning
Xuanqing Liu
Si Si
Xiaojin Zhu
Yang Li
Cho-Jui Hsieh
AAML
27
76
0
30 Oct 2019
Understanding and Quantifying Adversarial Examples Existence in Linear Classification
Xupeng Shi
A. Ding
AAML
14
3
0
27 Oct 2019
Detection of Adversarial Attacks and Characterization of Adversarial Subspace
Mohammad Esmaeilpour
P. Cardinal
Alessandro Lameiras Koerich
AAML
11
17
0
26 Oct 2019
Label Smoothing and Logit Squeezing: A Replacement for Adversarial Training?
Ali Shafahi
Amin Ghiasi
Furong Huang
Tom Goldstein
AAML
19
39
0
25 Oct 2019
Adversarial Example Detection by Classification for Deep Speech Recognition
Saeid Samizade
Z. Tan
Chao Shen
X. Guan
AAML
14
35
0
22 Oct 2019
An Alternative Surrogate Loss for PGD-based Adversarial Testing
Sven Gowal
J. Uesato
Chongli Qin
Po-Sen Huang
Timothy A. Mann
Pushmeet Kohli
AAML
39
88
0
21 Oct 2019
Deep Neural Rejection against Adversarial Examples
Angelo Sotgiu
Ambra Demontis
Marco Melis
Battista Biggio
Giorgio Fumera
Xiaoyi Feng
Fabio Roli
AAML
12
68
0
01 Oct 2019
Role of Spatial Context in Adversarial Robustness for Object Detection
Aniruddha Saha
Akshayvarun Subramanya
Koninika Patil
Hamed Pirsiavash
ObjD
AAML
23
53
0
30 Sep 2019
Test-Time Training with Self-Supervision for Generalization under Distribution Shifts
Yu Sun
Xiaolong Wang
Zhuang Liu
John Miller
Alexei A. Efros
Moritz Hardt
TTA
OOD
24
91
0
29 Sep 2019
Impact of Low-bitwidth Quantization on the Adversarial Robustness for Embedded Neural Networks
Rémi Bernhard
Pierre-Alain Moëllic
J. Dutertre
AAML
MQ
22
18
0
27 Sep 2019
Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks
Tianyu Pang
Kun Xu
Jun Zhu
AAML
20
103
0
25 Sep 2019
Sign-OPT: A Query-Efficient Hard-label Adversarial Attack
Minhao Cheng
Simranjit Singh
Patrick H. Chen
Pin-Yu Chen
Sijia Liu
Cho-Jui Hsieh
AAML
122
219
0
24 Sep 2019
Defending Against Physically Realizable Attacks on Image Classification
Tong Wu
Liang Tong
Yevgeniy Vorobeychik
AAML
14
125
0
20 Sep 2019
Sparse and Imperceivable Adversarial Attacks
Francesco Croce
Matthias Hein
AAML
28
199
0
11 Sep 2019
Protecting Neural Networks with Hierarchical Random Switching: Towards Better Robustness-Accuracy Trade-off for Stochastic Defenses
Xiao Wang
Siyue Wang
Pin-Yu Chen
Yanzhi Wang
Brian Kulis
Xue Lin
S. Chin
AAML
6
42
0
20 Aug 2019
Implicit Deep Learning
L. Ghaoui
Fangda Gu
Bertrand Travacca
Armin Askari
Alicia Y. Tsai
AI4CE
34
176
0
17 Aug 2019
Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training
Haichao Zhang
Jianyu Wang
AAML
21
230
0
24 Jul 2019
Towards Adversarially Robust Object Detection
Haichao Zhang
Jianyu Wang
AAML
ObjD
23
130
0
24 Jul 2019
Structure-Invariant Testing for Machine Translation
Pinjia He
Clara Meister
Z. Su
21
104
0
19 Jul 2019
Detecting and Diagnosing Adversarial Images with Class-Conditional Capsule Reconstructions
Yao Qin
Nicholas Frosst
S. Sabour
Colin Raffel
G. Cottrell
Geoffrey E. Hinton
GAN
AAML
17
71
0
05 Jul 2019
Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack
Francesco Croce
Matthias Hein
AAML
29
474
0
03 Jul 2019
Accurate, reliable and fast robustness evaluation
Wieland Brendel
Jonas Rauber
Matthias Kümmerer
Ivan Ustyuzhaninov
Matthias Bethge
AAML
OOD
11
111
0
01 Jul 2019
Evolving Robust Neural Architectures to Defend from Adversarial Attacks
Shashank Kotyan
Danilo Vasconcellos Vargas
OOD
AAML
24
36
0
27 Jun 2019
Defending Adversarial Attacks by Correcting logits
Yifeng Li
Lingxi Xie
Ya-Qin Zhang
Rui Zhang
Yanfeng Wang
Qi Tian
AAML
29
5
0
26 Jun 2019
A Computationally Efficient Method for Defending Adversarial Deep Learning Attacks
R. Sahay
Rehana Mahfuz
Aly El Gamal
AAML
11
5
0
13 Jun 2019
Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers
Hadi Salman
Greg Yang
Jungshian Li
Pengchuan Zhang
Huan Zhang
Ilya P. Razenshteyn
Sébastien Bubeck
AAML
25
535
0
09 Jun 2019
Adversarial Attack Generation Empowered by Min-Max Optimization
Jingkang Wang
Tianyun Zhang
Sijia Liu
Pin-Yu Chen
Jiacen Xu
M. Fardad
B. Li
AAML
25
35
0
09 Jun 2019
Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks
Maksym Andriushchenko
Matthias Hein
20
61
0
08 Jun 2019
Robustness for Non-Parametric Classification: A Generic Attack and Defense
Yao-Yuan Yang
Cyrus Rashtchian
Yizhen Wang
Kamalika Chaudhuri
SILM
AAML
26
42
0
07 Jun 2019
Enhancing Gradient-based Attacks with Symbolic Intervals
Shiqi Wang
Yizheng Chen
Ahmed Abdou
Suman Jana
AAML
15
15
0
05 Jun 2019
Multi-way Encoding for Robustness
Donghyun Kim
Sarah Adel Bargal
Jianming Zhang
Stan Sclaroff
AAML
15
2
0
05 Jun 2019
Enhancing Transformation-based Defenses using a Distribution Classifier
C. Kou
H. Lee
E. Chang
Teck Khim Ng
28
3
0
01 Jun 2019
Bypassing Backdoor Detection Algorithms in Deep Learning
T. Tan
Reza Shokri
FedML
AAML
22
149
0
31 May 2019
Robust Sparse Regularization: Simultaneously Optimizing Neural Network Robustness and Compactness
Adnan Siraj Rakin
Zhezhi He
Li Yang
Yanzhi Wang
Liqiang Wang
Deliang Fan
AAML
32
21
0
30 May 2019
Certifiably Robust Interpretation in Deep Learning
Alexander Levine
Sahil Singla
S. Feizi
FAtt
AAML
20
63
0
28 May 2019
Adversarially Robust Learning Could Leverage Computational Hardness
Sanjam Garg
S. Jha
Saeed Mahloujifar
Mohammad Mahmoody
AAML
14
24
0
28 May 2019
Scaleable input gradient regularization for adversarial robustness
Chris Finlay
Adam M. Oberman
AAML
10
77
0
27 May 2019
Enhancing Adversarial Defense by k-Winners-Take-All
Chang Xiao
Peilin Zhong
Changxi Zheng
AAML
11
97
0
25 May 2019
Interpreting Adversarially Trained Convolutional Neural Networks
Tianyuan Zhang
Zhanxing Zhu
AAML
GAN
FAtt
25
157
0
23 May 2019
What Do Adversarially Robust Models Look At?
Takahiro Itazuri
Yoshihiro Fukuhara
Hirokatsu Kataoka
Shigeo Morishima
16
5
0
19 May 2019
Robustification of deep net classifiers by key based diversified aggregation with pre-filtering
O. Taran
Shideh Rezaeifar
T. Holotyak
S. Voloshynovskiy
AAML
22
1
0
14 May 2019
Adversarial Training and Robustness for Multiple Perturbations
Florian Tramèr
Dan Boneh
AAML
SILM
17
374
0
30 Apr 2019
Adversarial Defense Through Network Profiling Based Path Extraction
Yuxian Qiu
Jingwen Leng
Cong Guo
Quan Chen
C. Li
M. Guo
Yuhao Zhu
AAML
16
50
0
17 Apr 2019
Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks
David J. Miller
Zhen Xiang
G. Kesidis
AAML
8
35
0
12 Apr 2019
Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks
Yinpeng Dong
Tianyu Pang
Hang Su
Jun Zhu
SILM
AAML
19
827
0
05 Apr 2019
Interpreting Adversarial Examples by Activation Promotion and Suppression
Kaidi Xu
Sijia Liu
Gaoyuan Zhang
Mengshu Sun
Pu Zhao
Quanfu Fan
Chuang Gan
X. Lin
AAML
FAtt
12
43
0
03 Apr 2019
Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks
Aamir Mustafa
Salman Khan
Munawar Hayat
Roland Göcke
Jianbing Shen
Ling Shao
AAML
9
151
0
01 Apr 2019
Bit-Flip Attack: Crushing Neural Network with Progressive Bit Search
Adnan Siraj Rakin
Zhezhi He
Deliang Fan
AAML
14
219
0
28 Mar 2019