Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples

1 February 2018
Anish Athalye
Nicholas Carlini
D. Wagner
AAML

Papers citing "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples"

50 / 521 papers shown
Purify++: Improving Diffusion-Purification with Advanced Diffusion Models and Control of Randomness
Boya Zhang
Weijian Luo
Zhihua Zhang
29
10
0
28 Oct 2023
Toward Stronger Textual Attack Detectors
Pierre Colombo
Marine Picot
Nathan Noiry
Guillaume Staerman
Pablo Piantanida
38
5
0
21 Oct 2023
On the Over-Memorization During Natural, Robust and Catastrophic Overfitting
Runqi Lin
Chaojian Yu
Bo Han
Tongliang Liu
22
7
0
13 Oct 2023
Promoting Robustness of Randomized Smoothing: Two Cost-Effective Approaches
Linbo Liu
T. Hoang
Lam M. Nguyen
Tsui-Wei Weng
AAML
19
0
0
11 Oct 2023
A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks
Yang Wang
B. Dong
Ke Xu
Haiyin Piao
Yufei Ding
Baocai Yin
Xin Yang
AAML
26
3
0
10 Oct 2023
Adversarial Examples Might be Avoidable: The Role of Data Concentration in Adversarial Robustness
Ambar Pal
Huaijin Hao
René Vidal
26
8
0
28 Sep 2023
Certifying LLM Safety against Adversarial Prompting
Aounon Kumar
Chirag Agarwal
Suraj Srinivas
Aaron Jiaxun Li
S. Feizi
Himabindu Lakkaraju
AAML
27
164
0
06 Sep 2023
HoSNN: Adversarially-Robust Homeostatic Spiking Neural Networks with Adaptive Firing Thresholds
Hejia Geng
Peng Li
AAML
32
3
0
20 Aug 2023
Robust Mixture-of-Expert Training for Convolutional Neural Networks
Yihua Zhang
Ruisi Cai
Tianlong Chen
Guanhua Zhang
Huan Zhang
Pin-Yu Chen
Shiyu Chang
Zhangyang Wang
Sijia Liu
MoE
AAML
OOD
34
16
0
19 Aug 2023
Training on Foveated Images Improves Robustness to Adversarial Attacks
Muhammad Ahmed Shah
Bhiksha Raj
AAML
25
3
0
01 Aug 2023
Doubly Robust Instance-Reweighted Adversarial Training
Daouda Sow
Sen-Fon Lin
Zhangyang Wang
Yitao Liang
AAML
OOD
33
2
0
01 Aug 2023
A LLM Assisted Exploitation of AI-Guardian
Nicholas Carlini
ELM
SILM
24
15
0
20 Jul 2023
Enhancing Adversarial Robustness via Score-Based Optimization
Boya Zhang
Weijian Luo
Zhihua Zhang
DiffM
24
12
0
10 Jul 2023
Robust Ranking Explanations
Chao Chen
Chenghua Guo
Guixiang Ma
Ming Zeng
Xi Zhang
Sihong Xie
FAtt
AAML
35
0
0
08 Jul 2023
Group-based Robustness: A General Framework for Customized Robustness in the Real World
Weiran Lin
Keane Lucas
Neo Eyal
Lujo Bauer
Michael K. Reiter
Mahmood Sharif
OOD
AAML
22
1
0
29 Jun 2023
Adversarial Evasion Attacks Practicality in Networks: Testing the Impact of Dynamic Learning
Mohamed el Shehaby
Ashraf Matrawy
AAML
22
7
0
08 Jun 2023
On the Importance of Backbone to the Adversarial Robustness of Object Detectors
Xiao-Li Li
Hang Chen
Xiaolin Hu
AAML
38
4
0
27 May 2023
Certified Zeroth-order Black-Box Defense with Robust UNet Denoiser
Astha Verma
A. Subramanyam
Siddhesh Bangar
Naman Lal
R. Shah
Shin'ichi Satoh
29
4
0
13 Apr 2023
On the Adversarial Inversion of Deep Biometric Representations
Gioacchino Tangari
Shreesh Keskar
H. Asghar
Dali Kaafar
AAML
31
2
0
12 Apr 2023
Generating Adversarial Samples in Mini-Batches May Be Detrimental To Adversarial Robustness
T. Redgrave
Colton R. Crum
AAML
21
0
0
30 Mar 2023
Beyond Empirical Risk Minimization: Local Structure Preserving Regularization for Improving Adversarial Robustness
Wei Wei
Jiahuan Zhou
Yingying Wu
AAML
13
0
0
29 Mar 2023
Provable Robustness for Streaming Models with a Sliding Window
Aounon Kumar
Vinu Sankar Sadasivan
S. Feizi
OOD
AAML
AI4TS
11
1
0
28 Mar 2023
Anti-DreamBooth: Protecting users from personalized text-to-image synthesis
T. Le
Hao Phung
Thuan Hoang Nguyen
Quan Dao
Ngoc N. Tran
Anh Tran
19
91
0
27 Mar 2023
Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck
Jongheon Jeong
Sihyun Yu
Hankook Lee
Jinwoo Shin
AAML
38
0
0
24 Mar 2023
Generalist: Decoupling Natural and Robust Generalization
Hongjun Wang
Yisen Wang
OOD
AAML
46
14
0
24 Mar 2023
Boosting Verified Training for Robust Image Classifications via Abstraction
Zhaodi Zhang
Zhiyi Xue
Yang Chen
Si Liu
Yueling Zhang
J. Liu
Min Zhang
33
4
0
21 Mar 2023
Immune Defense: A Novel Adversarial Defense Mechanism for Preventing the Generation of Adversarial Examples
Jinwei Wang
Hao Wu
Haihua Wang
Jiawei Zhang
X. Luo
Bin Ma
AAML
23
0
0
08 Mar 2023
A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking
Chang-Shu Liu
Yinpeng Dong
Wenzhao Xiang
X. Yang
Hang Su
Junyi Zhu
YueFeng Chen
Yuan He
H. Xue
Shibao Zheng
OOD
VLM
AAML
27
72
0
28 Feb 2023
Randomness in ML Defenses Helps Persistent Attackers and Hinders Evaluators
Keane Lucas
Matthew Jagielski
Florian Tramèr
Lujo Bauer
Nicholas Carlini
AAML
25
9
0
27 Feb 2023
Less is More: Data Pruning for Faster Adversarial Training
Yize Li
Pu Zhao
X. Lin
B. Kailkhura
Ryan Goldh
AAML
15
9
0
23 Feb 2023
PAD: Towards Principled Adversarial Malware Detection Against Evasion Attacks
Deqiang Li
Shicheng Cui
Yun Li
Jia Xu
Fu Xiao
Shouhuai Xu
AAML
48
17
0
22 Feb 2023
Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples
Qizhang Li
Yiwen Guo
W. Zuo
Hao Chen
AAML
27
35
0
10 Feb 2023
On the Robustness of Randomized Ensembles to Adversarial Perturbations
Hassan Dbouk
Naresh R Shanbhag
AAML
23
7
0
02 Feb 2023
Are Defenses for Graph Neural Networks Robust?
Felix Mujkanovic
Simon Geisler
Stephan Günnemann
Aleksandar Bojchevski
OOD
AAML
19
56
0
31 Jan 2023
RS-Del: Edit Distance Robustness Certificates for Sequence Classifiers via Randomized Deletion
Zhuoqun Huang
Neil G. Marchant
Keane Lucas
Lujo Bauer
O. Ohrimenko
Benjamin I. P. Rubinstein
AAML
24
15
0
31 Jan 2023
Language-Driven Anchors for Zero-Shot Adversarial Robustness
Xiao-Li Li
Wei Emma Zhang
Yining Liu
Zhan Hu
Bo-Wen Zhang
Xiaolin Hu
26
8
0
30 Jan 2023
Improving Adversarial Transferability with Scheduled Step Size and Dual Example
Zeliang Zhang
Peihan Liu
Xiaosen Wang
Chenliang Xu
AAML
21
3
0
30 Jan 2023
Improving the Accuracy-Robustness Trade-Off of Classifiers via Adaptive Smoothing
Yatong Bai
Brendon G. Anderson
Aerin Kim
Somayeh Sojoudi
AAML
30
18
0
29 Jan 2023
A Study on FGSM Adversarial Training for Neural Retrieval
Simon Lupart
S. Clinchant
AAML
24
7
0
25 Jan 2023
A Data-Centric Approach for Improving Adversarial Training Through the Lens of Out-of-Distribution Detection
Mohammad Azizmalayeri
Arman Zarei
Alireza Isavand
M. T. Manzuri
M. Rohban
OODD
35
0
0
25 Jan 2023
Explainability and Robustness of Deep Visual Classification Models
Jindong Gu
AAML
39
2
0
03 Jan 2023
On the Connection between Invariant Learning and Adversarial Training for Out-of-Distribution Generalization
Shiji Xin
Yifei Wang
Jingtong Su
Yisen Wang
OOD
21
7
0
18 Dec 2022
Confidence-aware Training of Smoothed Classifiers for Certified Robustness
Jongheon Jeong
Seojin Kim
Jinwoo Shin
AAML
19
7
0
18 Dec 2022
Robust Explanation Constraints for Neural Networks
Matthew Wicker
Juyeon Heo
Luca Costabello
Adrian Weller
FAtt
21
17
0
16 Dec 2022
Adversarial Example Defense via Perturbation Grading Strategy
Shaowei Zhu
Wanli Lyu
Bin Li
Z. Yin
Bin Luo
AAML
25
1
0
16 Dec 2022
On Evaluating Adversarial Robustness of Chest X-ray Classification: Pitfalls and Best Practices
Salah Ghamizi
Maxime Cordy
Michail Papadakis
Yves Le Traon
OOD
11
2
0
15 Dec 2022
Alternating Objectives Generates Stronger PGD-Based Adversarial Attacks
Nikolaos Antoniou
Efthymios Georgiou
Alexandros Potamianos
AAML
27
5
0
15 Dec 2022
Understanding Zero-Shot Adversarial Robustness for Large-Scale Models
Chengzhi Mao
Scott Geng
Junfeng Yang
Xin Eric Wang
Carl Vondrick
VLM
36
59
0
14 Dec 2022
Adversarially Robust Video Perception by Seeing Motion
Lingyu Zhang
Chengzhi Mao
Junfeng Yang
Carl Vondrick
VGen
AAML
34
2
0
13 Dec 2022
Robust Perception through Equivariance
Chengzhi Mao
Lingyu Zhang
Abhishek Joshi
Junfeng Yang
Hongya Wang
Carl Vondrick
BDL
AAML
29
7
0
12 Dec 2022