ResearchTrend.AI
Feature Squeezing Mitigates and Detects Carlini/Wagner Adversarial Examples
Weilin Xu, David Evans, Yanjun Qi
arXiv: 1705.10686 · AAML · 30 May 2017

Papers citing "Feature Squeezing Mitigates and Detects Carlini/Wagner Adversarial Examples"

20 / 20 papers shown
Resilience from Diversity: Population-based approach to harden models against adversarial attacks
Jasser Jasser, Ivan I. Garibay
AAML · 19 Nov 2021
Security and Privacy for Artificial Intelligence: Opportunities and Challenges
Ayodeji Oseni, Nour Moustafa, Helge Janicke, Peng Liu, Z. Tari, A. Vasilakos
AAML · 09 Feb 2021
Progressive Defense Against Adversarial Attacks for Deep Learning as a Service in Internet of Things
Ling Wang, Cheng Zhang, Zejian Luo, Chenguang Liu, Jie Liu, Xi Zheng, A. Vasilakos
AAML · 15 Oct 2020
Decoder-free Robustness Disentanglement without (Additional) Supervision
Yifei Wang, Dan Peng, Furui Liu, Zhenguo Li, Zhitang Chen, Jiansheng Yang
AAML · 02 Jul 2020
Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness
Computer Vision and Pattern Recognition (CVPR), 2020
Ahmadreza Jeddi, M. Shafiee, Michelle Karg, C. Scharfenberger, A. Wong
OOD, AAML · 02 Mar 2020
Randomization matters. How to defend against strong adversarial attacks
International Conference on Machine Learning (ICML), 2020
Rafael Pinot, Raphael Ettedgui, Geovani Rizk, Y. Chevaleyre, Jamal Atif
AAML · 26 Feb 2020
DLA: Dense-Layer-Analysis for Adversarial Example Detection
European Symposium on Security and Privacy (EuroS&P), 2019
Philip Sperl, Ching-yu Kao, Peng Chen, Konstantin Böttinger
AAML · 05 Nov 2019
Recovery Guarantees for Compressible Signals with Adversarial Noise
J. Dhaliwal, Kyle Hambrook
AAML · 15 Jul 2019
Moving Target Defense for Deep Visual Sensing against Adversarial Examples
ACM International Conference on Embedded Networked Sensor Systems (SenSys), 2019
Qun Song, Zhenyu Yan, Rui Tan
AAML · 11 May 2019
The Faults in Our Pi Stars: Security Issues and Open Challenges in Deep Reinforcement Learning
Vahid Behzadan, Arslan Munir
23 Oct 2018
Adversarial Perturbations Against Real-Time Video Classification Systems
Network and Distributed System Security Symposium (NDSS), 2018
Shasha Li, Ajaya Neupane, S. Paul, Chengyu Song, S. Krishnamurthy, Amit K. Roy-Chowdhury, A. Swami
AAML · 02 Jul 2018
Clipping free attacks against artificial neural networks
B. Addad, Jérôme Kodjabachian, Christophe Meyer
AAML · 26 Mar 2018
On Lyapunov exponents and adversarial perturbation
Vinay Uday Prabhu, Nishant Desai, John Whaley
AAML · 20 Feb 2018
Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey
Naveed Akhtar, Lin Wang
AAML · 02 Jan 2018
PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples
International Conference on Learning Representations (ICLR), 2017
Yang Song, Taesup Kim, Sebastian Nowozin, Stefano Ermon, Nate Kushman
AAML · 30 Oct 2017
Detecting Adversarial Attacks on Neural Network Policies with Visual Foresight
Yen-Chen Lin, Ming-Yuan Liu, Min Sun, Jia-Bin Huang
AAML · 02 Oct 2017
ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models
Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, Cho-Jui Hsieh
AAML · 14 Aug 2017
Efficient Defenses Against Adversarial Attacks
Valentina Zantedeschi, Maria-Irina Nicolae, Ambrish Rawat
AAML · 21 Jul 2017
Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong
Warren He, James Wei, Xinyun Chen, Nicholas Carlini, Basel Alomair
AAML · 15 Jun 2017
Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks
Weilin Xu, David Evans, Yanjun Qi
AAML · 04 Apr 2017