ResearchTrend.AI

Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
Nicholas Carlini, D. Wagner
AAML · 20 May 2017 · arXiv:1705.07263

Papers citing "Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods"

50 / 352 papers shown
Dompteur: Taming Audio Adversarial Examples
Thorsten Eisenhofer, Lea Schonherr, Joel Frank, Lars Speckemeier, D. Kolossa, Thorsten Holz
AAML · 10 Feb 2021

"What's in the box?!": Deflecting Adversarial Attacks by Randomly Deploying Adversarially-Disjoint Models
Sahar Abdelnabi, Mario Fritz
AAML · 09 Feb 2021

Adversarial Attacks and Defenses in Physiological Computing: A Systematic Review
Dongrui Wu, Jiaxin Xu, Weili Fang, Yi Zhang, Liuqing Yang, Xiaodong Xu, Hanbin Luo, Xiang Yu
AAML · 04 Feb 2021

Increasing the Confidence of Deep Neural Networks by Coverage Analysis
Giulio Rossolini, Alessandro Biondi, Giorgio Buttazzo
AAML · 28 Jan 2021

Error Diffusion Halftoning Against Adversarial Examples
Shao-Yuan Lo, Vishal M. Patel
DiffM · 23 Jan 2021

Fundamental Tradeoffs in Distributionally Adversarial Training
M. Mehrabi, Adel Javanmard, Ryan A. Rossi, Anup B. Rao, Tung Mai
AAML · 15 Jan 2021
The Vulnerability of Semantic Segmentation Networks to Adversarial Attacks in Autonomous Driving: Enhancing Extensive Environment Sensing
Andreas Bär, Jonas Löhdefink, Nikhil Kapoor, Serin Varghese, Fabian Hüger, Peter Schlicht, Tim Fingscheidt
AAML · 11 Jan 2021

Demystifying Deep Neural Networks Through Interpretation: A Survey
Giang Dao, Minwoo Lee
FaML, FAtt · 13 Dec 2020

Omni: Automated Ensemble with Unexpected Models against Adversarial Evasion Attack
Rui Shu, Tianpei Xia, Laurie A. Williams, Tim Menzies
AAML · 23 Nov 2020

Adversarial Classification: Necessary conditions and geometric flows
Nicolas García Trillos, Ryan W. Murray
AAML · 21 Nov 2020

The Vulnerability of the Neural Networks Against Adversarial Examples in Deep Learning Algorithms
Rui Zhao
AAML · 02 Nov 2020
GreedyFool: Distortion-Aware Sparse Adversarial Attack
Xiaoyi Dong, Dongdong Chen, Jianmin Bao, Chuan Qin, Lu Yuan, Weiming Zhang, Nenghai Yu, Dong Chen
AAML · 26 Oct 2020

Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness
Guillermo Ortiz-Jiménez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard
AAML · 19 Oct 2020

Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples
Sven Gowal, Chongli Qin, J. Uesato, Timothy A. Mann, Pushmeet Kohli
AAML · 07 Oct 2020

Query complexity of adversarial attacks
Grzegorz Gluch, R. Urbanke
AAML · 02 Oct 2020
An Empirical Study of DNNs Robustification Inefficacy in Protecting Visual Recommenders
Vito Walter Anelli, Tommaso Di Noia, Daniele Malitesta, Felice Antonio Merra
AAML · 02 Oct 2020

Block-wise Image Transformation with Secret Key for Adversarially Robust Defense
Maungmaung Aprilpyone, Hitoshi Kiya
02 Oct 2020

Optimal Provable Robustness of Quantum Classification via Quantum Hypothesis Testing
Maurice Weber, Nana Liu, Bo-wen Li, Ce Zhang, Zhikuan Zhao
AAML · 21 Sep 2020

Certifying Confidence via Randomized Smoothing
Aounon Kumar, Alexander Levine, S. Feizi, Tom Goldstein
UQCV · 17 Sep 2020
Adversarial Machine Learning in Image Classification: A Survey Towards the Defender's Perspective
G. R. Machado, Eugênio Silva, R. Goldschmidt
AAML · 08 Sep 2020

Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching
Jonas Geiping, Liam H. Fowl, Yifan Jiang, W. Czaja, Gavin Taylor, Michael Moeller, Tom Goldstein
AAML · 04 Sep 2020

Ramifications of Approximate Posterior Inference for Bayesian Deep Learning in Adversarial and Out-of-Distribution Settings
John Mitros, A. Pakrashi, Brian Mac Namee
UQCV · 03 Sep 2020

PermuteAttack: Counterfactual Explanation of Machine Learning Credit Scorecards
Masoud Hashemi, Ali Fathi
AAML · 24 Aug 2020
Defending Adversarial Examples via DNN Bottleneck Reinforcement
Wenqing Liu, Miaojing Shi, Teddy Furon, Li Li
AAML · 12 Aug 2020

Optimizing Information Loss Towards Robust Neural Networks
Philip Sperl, Konstantin Böttinger
AAML · 07 Aug 2020

Adversarial Examples on Object Recognition: A Comprehensive Survey
A. Serban, E. Poll, Joost Visser
AAML · 07 Aug 2020

Robust Machine Learning via Privacy/Rate-Distortion Theory
Ye Wang, Shuchin Aeron, Adnan Siraj Rakin, T. Koike-Akino, P. Moulin
OOD · 22 Jul 2020

Towards Visual Distortion in Black-Box Attacks
Nannan Li, Zhenzhong Chen
21 Jul 2020
Robust Image Classification Using A Low-Pass Activation Function and DCT Augmentation
Md Tahmid Hossain, S. Teng, Ferdous Sohel, Guojun Lu
18 Jul 2020

Do Adversarially Robust ImageNet Models Transfer Better?
Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, A. Madry
16 Jul 2020

Adversarial Attacks against Neural Networks in Audio Domain: Exploiting Principal Components
Ken Alparslan, Yigit Can Alparslan, Matthew Burlick
AAML · 14 Jul 2020

Beyond Perturbations: Learning Guarantees with Arbitrary Adversarial Test Examples
S. Goldwasser, Adam Tauman Kalai, Y. Kalai, Omar Montasser
AAML · 10 Jul 2020

How benign is benign overfitting?
Amartya Sanyal, P. Dokania, Varun Kanade, Philip Torr
NoLa, AAML · 08 Jul 2020
Deep Learning Defenses Against Adversarial Examples for Dynamic Risk Assessment
Xabier Echeberria-Barrio, Amaia Gil-Lerchundi, Ines Goicoechea-Telleria, Raul Orduna Urrutia
AAML · 02 Jul 2020

Improving Calibration through the Relationship with Adversarial Robustness
Yao Qin, Xuezhi Wang, Alex Beutel, Ed H. Chi
AAML · 29 Jun 2020

Beware the Black-Box: on the Robustness of Recent Defenses to Adversarial Examples
Kaleel Mahmood, Deniz Gurevin, Marten van Dijk, Phuong Ha Nguyen
AAML · 18 Jun 2020

Tricking Adversarial Attacks To Fail
Blerta Lindqvist
AAML · 08 Jun 2020
Adversarial Item Promotion: Vulnerabilities at the Core of Top-N Recommenders that Use Images to Address Cold Start
Zhuoran Liu, Martha Larson
DiffM · 02 Jun 2020

Adversarial Classification via Distributional Robustness with Wasserstein Ambiguity
Nam Ho-Nguyen, Stephen J. Wright
OOD · 28 May 2020

Vulnerability of deep neural networks for detecting COVID-19 cases from chest X-ray images to universal adversarial attacks
Hokuto Hirano, K. Koga, Kazuhiro Takemoto
AAML · 22 May 2020

PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking
Chong Xiang, A. Bhagoji, Vikash Sehwag, Prateek Mittal
AAML · 17 May 2020

Enhancing Intrinsic Adversarial Robustness via Feature Pyramid Decoder
Guanlin Li, Shuya Ding, Jun Luo, Chang-rui Liu
AAML · 06 May 2020
Adversarial Training against Location-Optimized Adversarial Patches
Sukrut Rao, David Stutz, Bernt Schiele
AAML · 05 May 2020

Towards Characterizing Adversarial Defects of Deep Learning Software from the Lens of Uncertainty
Xiyue Zhang, Xiaofei Xie, Lei Ma, Xiaoning Du, Q. Hu, Yang Liu, Jianjun Zhao, Meng Sun
AAML · 24 Apr 2020

Advanced Evasion Attacks and Mitigations on Practical ML-Based Phishing Website Classifiers
Yusi Lei, Sen Chen, Lingling Fan, Fu Song, Yang Liu
AAML · 15 Apr 2020

Certifiable Robustness to Adversarial State Uncertainty in Deep Reinforcement Learning
Michael Everett, Bjorn Lutjens, Jonathan P. How
AAML · 11 Apr 2020
DaST: Data-free Substitute Training for Adversarial Attacks
Mingyi Zhou, Jing Wu, Yipeng Liu, Shuaicheng Liu, Ce Zhu
28 Mar 2020

Systematic Evaluation of Privacy Risks of Machine Learning Models
Liwei Song, Prateek Mittal
MIACV · 24 Mar 2020

Adversarial Robustness on In- and Out-Distribution Improves Explainability
Maximilian Augustin, Alexander Meinke, Matthias Hein
OOD · 20 Mar 2020

Breaking certified defenses: Semantic adversarial examples with spoofed robustness certificates
Amin Ghiasi, Ali Shafahi, Tom Goldstein
19 Mar 2020