arXiv: 1802.00420
Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
1 February 2018
Anish Athalye, Nicholas Carlini, D. Wagner
AAML
Papers citing
"Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples"
50 / 579 papers shown
Membership Inference Attacks and Defenses in Neural Network Pruning
    Xiaoyong Yuan, Lan Zhang · AAML · 16 / 44 / 0 · 07 Feb 2022
Robust Binary Models by Pruning Randomly-initialized Networks
    Chen Liu, Ziqi Zhao, Sabine Süsstrunk, Mathieu Salzmann · TPM, AAML, MQ · 19 / 4 / 0 · 03 Feb 2022
Query Efficient Decision Based Sparse Attacks Against Black-Box Deep Learning Models
    Viet Vo, Ehsan Abbasnejad, D. Ranasinghe · AAML · 22 / 14 / 0 · 31 Jan 2022
Boundary Defense Against Black-box Adversarial Attacks
    Manjushree B. Aithal, Xiaohua Li · AAML · 17 / 6 / 0 · 31 Jan 2022
Scale-Invariant Adversarial Attack for Evaluating and Enhancing Adversarial Defenses
    Mengting Xu, Tao Zhang, Zhongnian Li, Daoqiang Zhang · AAML · 30 / 1 / 0 · 29 Jan 2022
Efficient and Robust Classification for Sparse Attacks
    M. Beliaev, Payam Delgosha, Hamed Hassani, Ramtin Pedarsani · AAML · 16 / 2 / 0 · 23 Jan 2022
Parallel Rectangle Flip Attack: A Query-based Black-box Attack against Object Detection
    Siyuan Liang, Baoyuan Wu, Yanbo Fan, Xingxing Wei, Xiaochun Cao · AAML · 22 / 70 / 0 · 22 Jan 2022
On Adversarial Robustness of Trajectory Prediction for Autonomous Vehicles
    Qingzhao Zhang, Shengtuo Hu, Jiachen Sun, Qi Alfred Chen, Z. Morley Mao · AAML · 35 / 133 / 0 · 13 Jan 2022
Constrained Gradient Descent: A Powerful and Principled Evasion Attack Against Neural Networks
    Weiran Lin, Keane Lucas, Lujo Bauer, Michael K. Reiter, Mahmood Sharif · AAML · 29 / 5 / 0 · 28 Dec 2021
Learning Robust and Lightweight Model through Separable Structured Transformations
    Xian Wei, Yanhui Huang, Yang Xu, Mingsong Chen, Hai Lan, Yuanxiang Li, Zhongfeng Wang, Xuan Tang · OOD · 24 / 0 / 0 · 27 Dec 2021
How Should Pre-Trained Language Models Be Fine-Tuned Towards Adversarial Robustness?
    Xinhsuai Dong, Anh Tuan Luu, Min-Bin Lin, Shuicheng Yan, Hanwang Zhang · SILM, AAML · 20 / 55 / 0 · 22 Dec 2021
A Theoretical View of Linear Backpropagation and Its Convergence
    Ziang Li, Yiwen Guo, Haodi Liu, Changshui Zhang · AAML · 16 / 3 / 0 · 21 Dec 2021
All You Need is RAW: Defending Against Adversarial Attacks with Camera Image Pipelines
    Yuxuan Zhang, B. Dong, Felix Heide · AAML · 26 / 8 / 0 · 16 Dec 2021
On the Convergence and Robustness of Adversarial Training
    Yisen Wang, Xingjun Ma, James Bailey, Jinfeng Yi, Bowen Zhou, Quanquan Gu · AAML · 194 / 345 / 0 · 15 Dec 2021
Temporal Shuffling for Defending Deep Action Recognition Models against Adversarial Attacks
    Jaehui Hwang, Huan Zhang, Jun-Ho Choi, Cho-Jui Hsieh, Jong-Seok Lee · AAML · 17 / 5 / 0 · 15 Dec 2021
On the Impact of Hard Adversarial Instances on Overfitting in Adversarial Training
    Chen Liu, Zhichao Huang, Mathieu Salzmann, Tong Zhang, Sabine Süsstrunk · AAML · 15 / 13 / 0 · 14 Dec 2021
Triangle Attack: A Query-efficient Decision-based Adversarial Attack
    Xiaosen Wang, Zeliang Zhang, Kangheng Tong, Dihong Gong, Kun He, Zhifeng Li, Wei Liu · AAML · 24 / 56 / 0 · 13 Dec 2021
Interpolated Joint Space Adversarial Training for Robust and Generalizable Defenses
    Chun Pong Lau, Jiang-Long Liu, Hossein Souri, Wei-An Lin, S. Feizi, Ramalingam Chellappa · AAML · 29 / 12 / 0 · 12 Dec 2021
RamBoAttack: A Robust Query Efficient Deep Neural Network Decision Exploit
    Viet Vo, Ehsan Abbasnejad, D. Ranasinghe · AAML · 17 / 9 / 0 · 10 Dec 2021
Mutual Adversarial Training: Learning together is better than going alone
    Jiang-Long Liu, Chun Pong Lau, Hossein Souri, S. Feizi, Ramalingam Chellappa · OOD, AAML · 35 / 24 / 0 · 09 Dec 2021
SoK: Anti-Facial Recognition Technology
    Emily Wenger, Shawn Shan, Haitao Zheng, Ben Y. Zhao · PICV · 32 / 13 / 0 · 08 Dec 2021
Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection
    Jiangjiang Liu, Alexander Levine, Chun Pong Lau, Ramalingam Chellappa, S. Feizi · AAML · 24 / 76 / 0 · 08 Dec 2021
Stochastic Local Winner-Takes-All Networks Enable Profound Adversarial Robustness
    Konstantinos P. Panousis, S. Chatzis, Sergios Theodoridis · BDL, AAML · 60 / 11 / 0 · 05 Dec 2021
On the Existence of the Adversarial Bayes Classifier (Extended Version)
    Pranjal Awasthi, Natalie Frank, M. Mohri · 21 / 24 / 0 · 03 Dec 2021
Certified Adversarial Defenses Meet Out-of-Distribution Corruptions: Benchmarking Robustness and Simple Baselines
    Jiachen Sun, Akshay Mehra, B. Kailkhura, Pin-Yu Chen, Dan Hendrycks, Jihun Hamm, Z. Morley Mao · AAML · 28 / 21 / 0 · 01 Dec 2021
Adv-4-Adv: Thwarting Changing Adversarial Perturbations via Adversarial Domain Adaptation
    Tianyue Zheng, Zhe Chen, Shuya Ding, Chao Cai, Jun-Jie Luo · AAML · 33 / 5 / 0 · 01 Dec 2021
Push Stricter to Decide Better: A Class-Conditional Feature Adaptive Framework for Improving Adversarial Robustness
    Jia-Li Yin, Lehui Xie, Wanqing Zhu, Ximeng Liu, Bo-Hao Chen · TTA, AAML · 19 / 3 / 0 · 01 Dec 2021
Mitigating Adversarial Attacks by Distributing Different Copies to Different Users
    Jiyi Zhang, Hansheng Fang, W. Tann, Ke Xu, Chengfang Fang, E. Chang · AAML · 21 / 3 / 0 · 30 Nov 2021
Pyramid Adversarial Training Improves ViT Performance
    Charles Herrmann, Kyle Sargent, Lu Jiang, Ramin Zabih, Huiwen Chang, Ce Liu, Dilip Krishnan, Deqing Sun · ViT · 26 / 56 / 0 · 30 Nov 2021
Subspace Adversarial Training
    Tao Li, Yingwen Wu, Sizhe Chen, Kun Fang, Xiaolin Huang · AAML, OOD · 38 / 56 / 0 · 24 Nov 2021
Medical Aegis: Robust adversarial protectors for medical images
    Qingsong Yao, Zecheng He, S. Kevin Zhou · AAML, MedIm · 19 / 2 / 0 · 22 Nov 2021
Data Augmentation Can Improve Robustness
    Sylvestre-Alvise Rebuffi, Sven Gowal, D. A. Calian, Florian Stimberg, Olivia Wiles, Timothy A. Mann · AAML · 17 / 269 / 0 · 09 Nov 2021
MixACM: Mixup-Based Robustness Transfer via Distillation of Activated Channel Maps
    Muhammad Awais, Fengwei Zhou, Chuanlong Xie, Jiawei Li, Sung-Ho Bae, Zhenguo Li · AAML · 32 / 17 / 0 · 09 Nov 2021
Robust and Information-theoretically Safe Bias Classifier against Adversarial Attacks
    Lijia Yu, Xiao-Shan Gao · AAML · 16 / 5 / 0 · 08 Nov 2021
LTD: Low Temperature Distillation for Robust Adversarial Training
    Erh-Chung Chen, Che-Rung Lee · AAML · 24 / 26 / 0 · 03 Nov 2021
Meta-Learning the Search Distribution of Black-Box Random Search Based Adversarial Attacks
    Maksym Yatsura, J. H. Metzen, Matthias Hein · OOD · 26 / 14 / 0 · 02 Nov 2021
Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds
    Yujia Huang, Huan Zhang, Yuanyuan Shi, J Zico Kolter, Anima Anandkumar · 27 / 76 / 0 · 02 Nov 2021
ε-weakened Robustness of Deep Neural Networks
    Pei Huang, Yuting Yang, Minghao Liu, Fuqi Jia, Feifei Ma, Jian Zhang · AAML · 19 / 18 / 0 · 29 Oct 2021
Adversarial Robustness in Multi-Task Learning: Promises and Illusions
    Salah Ghamizi, Maxime Cordy, Mike Papadakis, Yves Le Traon · OOD, AAML · 25 / 18 / 0 · 26 Oct 2021
Ensemble Federated Adversarial Training with Non-IID data
    Shuang Luo, Didi Zhu, Zexi Li, Chao-Xiang Wu · FedML · 25 / 7 / 0 · 26 Oct 2021
ADC: Adversarial attacks against object Detection that evade Context consistency checks
    Mingjun Yin, Shasha Li, Chengyu Song, M. Salman Asif, A. Roy-Chowdhury, S. Krishnamurthy · AAML · 24 / 22 / 0 · 24 Oct 2021
Improving Robustness using Generated Data
    Sven Gowal, Sylvestre-Alvise Rebuffi, Olivia Wiles, Florian Stimberg, D. A. Calian, Timothy A. Mann · 30 / 293 / 0 · 18 Oct 2021
Parameterizing Activation Functions for Adversarial Robustness
    Sihui Dai, Saeed Mahloujifar, Prateek Mittal · AAML · 42 / 32 / 0 · 11 Oct 2021
Adversarial Token Attacks on Vision Transformers
    Ameya Joshi, Gauri Jagatap, C. Hegde · ViT · 30 / 19 / 0 · 08 Oct 2021
Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks
    Hanxun Huang, Yisen Wang, S. Erfani, Quanquan Gu, James Bailey, Xingjun Ma · AAML, TPM · 46 / 100 / 0 · 07 Oct 2021
Label Noise in Adversarial Training: A Novel Perspective to Study Robust Overfitting
    Chengyu Dong, Liyuan Liu, Jingbo Shang · NoLa, AAML · 56 / 18 / 0 · 07 Oct 2021
Improving Adversarial Robustness for Free with Snapshot Ensemble
    Yihao Wang · AAML, UQCV · 9 / 1 / 0 · 07 Oct 2021
HIRE-SNN: Harnessing the Inherent Robustness of Energy-Efficient Deep Spiking Neural Networks by Training with Crafted Input Noise
    Souvik Kundu, Massoud Pedram, P. Beerel · AAML · 14 / 70 / 0 · 06 Oct 2021
MUTEN: Boosting Gradient-Based Adversarial Attacks via Mutant-Based Ensembles
    Yuejun Guo, Qiang Hu, Maxime Cordy, Michail Papadakis, Yves Le Traon · AAML · 24 / 2 / 0 · 27 Sep 2021
CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks
    Mikhail Aleksandrovich Pautov, Nurislam Tursynbek, Marina Munkhoeva, Nikita Muravev, Aleksandr Petiushko, Ivan V. Oseledets · AAML · 44 / 15 / 0 · 22 Sep 2021