On Adaptive Attacks to Adversarial Example Defenses

arXiv:2002.08347 · 19 February 2020
Florian Tramèr, Nicholas Carlini, Wieland Brendel, A. Madry
AAML

Papers citing "On Adaptive Attacks to Adversarial Example Defenses"

50 / 540 papers shown
Attacking Perceptual Similarity Metrics · Abhijay Ghildyal, Feng Liu · AAML · 15 May 2023
Understanding Noise-Augmented Training for Randomized Smoothing · Ambar Pal, Jeremias Sulam · AAML · 08 May 2023
TAPS: Connecting Certified and Adversarial Training · Yuhao Mao, Mark Niklas Muller, Marc Fischer, Martin Vechev · AAML · 08 May 2023
Adversarial Examples Detection with Enhanced Image Difference Features based on Local Histogram Equalization · Z. Yin, Shaowei Zhu, Han Su, Jianteng Peng, Wanli Lyu, Bin Luo · AAML · 08 May 2023
Madvex: Instrumentation-based Adversarial Attacks on Machine Learning Malware Detection · Yang Cai, Felix Mächtle, C. Daskalakis, Volodymyr Bezsmertnyi, T. Eisenbarth · AAML · 04 May 2023
New Adversarial Image Detection Based on Sentiment Analysis · Yulong Wang, Tianxiang Li, Shenghong Li, Xinnan Yuan, W. Ni · AAML · 03 May 2023
Stratified Adversarial Robustness with Rejection · Jiefeng Chen, Jayaram Raghuram, Jihye Choi, Xi Wu, Yingyu Liang, S. Jha · 02 May 2023
Revisiting Robustness in Graph Machine Learning · Lukas Gosch, Daniel Sturm, Simon Geisler, Stephan Günnemann · AAML, OOD · 01 May 2023
RNN-Guard: Certified Robustness Against Multi-frame Attacks for Recurrent Neural Networks · Yunruo Zhang, Tianyu Du, S. Ji, Peng Tang, Shanqing Guo · AAML · 17 Apr 2023
Exploring the Connection between Robust and Generative Models · Senad Beadini, I. Masi · AAML · 08 Apr 2023
Robust Deep Learning Models Against Semantic-Preserving Adversarial Attack · Dashan Gao, Yunce Zhao, Yinghua Yao, Zeqi Zhang, Bifei Mao, Xin Yao · AAML · 08 Apr 2023
Does Prompt-Tuning Language Model Ensure Privacy? · Shangyu Xie, Wei Dai, Esha Ghosh, Sambuddha Roy, Dan Schwartz, Kim Laine · SILM · 07 Apr 2023
Fooling the Image Dehazing Models by First Order Gradient · Jie Gui, Xiaofeng Cong, Chengwei Peng, Yuan Yan Tang, James T. Kwok · AAML · 30 Mar 2023
Provable Robustness for Streaming Models with a Sliding Window · Aounon Kumar, Vinu Sankar Sadasivan, S. Feizi · OOD, AAML, AI4TS · 28 Mar 2023
EMShepherd: Detecting Adversarial Samples via Side-channel Leakage · Ruyi Ding, Gongye Cheng, Siyue Wang, A. A. Ding, Yunsi Fei · AAML · 27 Mar 2023
Adversarial Attack and Defense for Medical Image Analysis: Methods and Applications · Junhao Dong, Junxi Chen, Xiaohua Xie, Jianhuang Lai, H. Chen · AAML, MedIm · 24 Mar 2023
Feature Separation and Recalibration for Adversarial Robustness · Woo Jae Kim, Y. Cho, Junsik Jung, Sung-eui Yoon · AAML · 24 Mar 2023
Use Perturbations when Learning from Explanations · Juyeon Heo, Vihari Piratla, Matthew Wicker, Adrian Weller · AAML · 11 Mar 2023
Stateful Defenses for Machine Learning Models Are Not Yet Secure Against Black-box Attacks · Ryan Feng, Ashish Hooda, Neal Mangaokar, Kassem Fawaz, S. Jha, Atul Prakash · AAML · 11 Mar 2023
Consistent Valid Physically-Realizable Adversarial Attack against Crowd-flow Prediction Models · Hassan Ali, M. A. Butt, F. Filali, Ala I. Al-Fuqaha, Junaid Qadir · AAML · 05 Mar 2023
Improved Robustness Against Adaptive Attacks With Ensembles and Error-Correcting Output Codes · Thomas Philippon, Christian Gagné · AAML · 04 Mar 2023
Certified Robust Neural Networks: Generalization and Corruption Resistance · Amine Bennouna, Ryan Lucas, Bart P. G. Van Parys · 03 Mar 2023
Defending against Adversarial Audio via Diffusion Model · Shutong Wu, Jiong Wang, Wei Ping, Weili Nie, Chaowei Xiao · DiffM · 02 Mar 2023
FreeEagle: Detecting Complex Neural Trojans in Data-Free Cases · Chong Fu, Xuhong Zhang, S. Ji, Ting Wang, Peng Lin, Yanghe Feng, Jianwei Yin · AAML · 28 Feb 2023
A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking · Chang-Shu Liu, Yinpeng Dong, Wenzhao Xiang, X. Yang, Hang Su, Junyi Zhu, YueFeng Chen, Yuan He, H. Xue, Shibao Zheng · OOD, VLM, AAML · 28 Feb 2023
Randomness in ML Defenses Helps Persistent Attackers and Hinders Evaluators · Keane Lucas, Matthew Jagielski, Florian Tramèr, Lujo Bauer, Nicholas Carlini · AAML · 27 Feb 2023
Less is More: Data Pruning for Faster Adversarial Training · Yize Li, Pu Zhao, X. Lin, B. Kailkhura, Ryan Goldh · AAML · 23 Feb 2023
MalProtect: Stateful Defense Against Adversarial Query Attacks in ML-based Malware Detection · Aqib Rashid, Jose Such · AAML · 21 Feb 2023
Reliability Assurance for Deep Neural Network Architectures Against Numerical Defects · Linyi Li, Yuhao Zhang, Luyao Ren, Yingfei Xiong, Tao Xie · 13 Feb 2023
Better Diffusion Models Further Improve Adversarial Training · Zekai Wang, Tianyu Pang, Chao Du, Min-Bin Lin, Weiwei Liu, Shuicheng Yan · DiffM · 09 Feb 2023
A Minimax Approach Against Multi-Armed Adversarial Attacks Detection · Federica Granese, Marco Romanelli, S. Garg, Pablo Piantanida · AAML · 04 Feb 2023
On the Robustness of Randomized Ensembles to Adversarial Perturbations · Hassan Dbouk, Naresh R Shanbhag · AAML · 02 Feb 2023
Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression · Zhuoran Liu, Zhengyu Zhao, Martha Larson · 31 Jan 2023
Are Defenses for Graph Neural Networks Robust? · Felix Mujkanovic, Simon Geisler, Stephan Günnemann, Aleksandar Bojchevski · OOD, AAML · 31 Jan 2023
RS-Del: Edit Distance Robustness Certificates for Sequence Classifiers via Randomized Deletion · Zhuoqun Huang, Neil G. Marchant, Keane Lucas, Lujo Bauer, O. Ohrimenko, Benjamin I. P. Rubinstein · AAML · 31 Jan 2023
Improving the Accuracy-Robustness Trade-Off of Classifiers via Adaptive Smoothing · Yatong Bai, Brendon G. Anderson, Aerin Kim, Somayeh Sojoudi · AAML · 29 Jan 2023
Selecting Models based on the Risk of Damage Caused by Adversarial Attacks · Jona Klemenc, Holger Trittenbach · AAML · 28 Jan 2023
Guidance Through Surrogate: Towards a Generic Diagnostic Attack · Muzammal Naseer, Salman Khan, Fatih Porikli, F. Khan · AAML · 30 Dec 2022
Confidence-aware Training of Smoothed Classifiers for Certified Robustness · Jongheon Jeong, Seojin Kim, Jinwoo Shin · AAML · 18 Dec 2022
Alternating Objectives Generates Stronger PGD-Based Adversarial Attacks · Nikolaos Antoniou, Efthymios Georgiou, Alexandros Potamianos · AAML · 15 Dec 2022
Generative Robust Classification · Xuwang Yin · TPM · 14 Dec 2022
Carpet-bombing patch: attacking a deep network without usual requirements · Pol Labarbarie, Adrien Chan-Hon-Tong, Stéphane Herbin, Milad Leyli-Abadi · AAML · 12 Dec 2022
DISCO: Adversarial Defense with Local Implicit Functions · Chih-Hui Ho, Nuno Vasconcelos · AAML · 11 Dec 2022
Multiple Perturbation Attack: Attack Pixelwise Under Different $\ell_p$-norms For Better Adversarial Performance · Ngoc N. Tran, Anh Tuan Bui, Dinh Q. Phung, Trung Le · AAML · 05 Dec 2022
Recognizing Object by Components with Human Prior Knowledge Enhances Adversarial Robustness of Deep Neural Networks · Xiao-Li Li, Ziqi Wang, Bo-Wen Zhang, Fuchun Sun, Xiaolin Hu · 04 Dec 2022
Adversarial Rademacher Complexity of Deep Neural Networks · Jiancong Xiao, Yanbo Fan, Ruoyu Sun, Zhimin Luo · AAML · 27 Nov 2022
Game Theoretic Mixed Experts for Combinational Adversarial Machine Learning · Ethan Rathbun, Kaleel Mahmood, Sohaib Ahmad, Caiwen Ding, Marten van Dijk · AAML · 26 Nov 2022
Understanding the Vulnerability of Skeleton-based Human Activity Recognition via Black-box Attack · Yunfeng Diao, He-Nan Wang, Tianjia Shao, Yong-Liang Yang, Kun Zhou, David C. Hogg, Meng Wang · AAML · 21 Nov 2022
Towards Robust Dataset Learning · Yihan Wu, Xinda Li, Florian Kerschbaum, Heng Huang, Hongyang R. Zhang · DD, OOD · 19 Nov 2022
Adversarial Detection by Approximation of Ensemble Boundary · T. Windeatt · AAML · 18 Nov 2022