Towards Deep Learning Models Resistant to Adversarial Attacks

19 June 2017
Aleksander Madry
Aleksandar Makelov
Ludwig Schmidt
Dimitris Tsipras
Adrian Vladu
SILM, OOD
arXiv: 1706.06083 (abs · PDF · HTML) · GitHub (752★)
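
The paper frames robustness as a min-max optimization problem and trains models against a projected gradient descent (PGD) adversary. Below is a minimal PyTorch sketch of that training loop; the hyperparameters (eps, alpha, steps) and helper names (pgd_attack, adversarial_training_epoch) are illustrative assumptions chosen here, not the authors' released settings or code (see the linked GitHub repository for the reference implementation).

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=7):
    """L-infinity PGD: random start, then iterated projected gradient ascent on the loss.
    eps/alpha/steps are placeholder values, not the paper's dataset-specific settings."""
    # Random start inside the eps-ball, clipped to the valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball around x and into [0, 1].
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1).detach()
    return x_adv

def adversarial_training_epoch(model, loader, optimizer, device="cuda"):
    """One epoch of adversarial training: minimize the loss on PGD examples."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)              # inner maximization
        optimizer.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()  # outer minimization
        optimizer.step()
```

The sketch keeps only the essential loop structure; attack strength, learning-rate schedules, and evaluation against stronger attacks are handled in the reference code.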

Papers citing "Towards Deep Learning Models Resistant to Adversarial Attacks"

50 / 7,067 papers shown
Neural Belief Reasoner
International Joint Conference on Artificial Intelligence (IJCAI), 2019
Haifeng Qian
NAI, BDL
148
1
0
10 Sep 2019
FDA: Feature Disruptive Attack
IEEE International Conference on Computer Vision (ICCV), 2019
Aditya Ganeshan
B. S. Vivek
R. Venkatesh Babu
AAML
262
131
0
10 Sep 2019
TBT: Targeted Neural Network Attack with Bit Trojan
Computer Vision and Pattern Recognition (CVPR), 2019
Adnan Siraj Rakin
Zhezhi He
Deliang Fan
AAML
316
243
0
10 Sep 2019
Learning to Disentangle Robust and Vulnerable Features for Adversarial Detection
Byunggill Joe
Sung Ju Hwang
I. Shin
AAML
83
2
0
10 Sep 2019
BOSH: An Efficient Meta Algorithm for Decision-based Attacks
Zhenxin Xiao
Puyudi Yang
Yuchen Eleanor Jiang
Kai-Wei Chang
Cho-Jui Hsieh
AAML
196
1
0
10 Sep 2019
Improving the Explainability of Neural Sentiment Classifiers via Data Augmentation
Hanjie Chen
Yangfeng Ji
262
11
0
10 Sep 2019
Adversarial Robustness Against the Union of Multiple Perturbation Models
International Conference on Machine Learning (ICML), 2019
Pratyush Maini
Eric Wong
J. Zico Kolter
OOD, AAML
274
163
0
09 Sep 2019
When Explainability Meets Adversarial Learning: Detecting Adversarial Examples using SHAP Signatures
IEEE International Joint Conference on Neural Networks (IJCNN), 2019
Gil Fidel
Ron Bitton
A. Shabtai
FAtt, GAN
164
131
0
08 Sep 2019
On the Need for Topology-Aware Generative Models for Manifold-Based Defenses
International Conference on Learning Representations (ICLR), 2019
Uyeong Jang
Susmit Jha
S. Jha
AAML
277
14
0
07 Sep 2019
Blackbox Attacks on Reinforcement Learning Agents Using Approximated Temporal Information
Yiren Zhao
Ilia Shumailov
Han Cui
Xitong Gao
Robert D. Mullins
Ross J. Anderson
AAML
211
34
0
06 Sep 2019
Natural Adversarial Sentence Generation with Gradient-based Perturbation
Yu-Lun Hsieh
Minhao Cheng
Da-Cheng Juan
Wei Wei
W. Hsu
Cho-Jui Hsieh
AAML
107
2
0
06 Sep 2019
Are Adversarial Robustness and Common Perturbation Robustness Independent Attributes ?
Alfred Laugros
A. Caplier
Matthieu Ospici
AAML
172
44
0
04 Sep 2019
Achieving Verified Robustness to Symbol Substitutions via Interval Bound Propagation
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019
Po-Sen Huang
Robert Stanforth
Johannes Welbl
Chris Dyer
Dani Yogatama
Sven Gowal
Krishnamurthy Dvijotham
Pushmeet Kohli
AAML
220
174
0
03 Sep 2019
High Accuracy and High Fidelity Extraction of Neural Networks
USENIX Security Symposium (USENIX Security), 2019
Matthew Jagielski
Nicholas Carlini
David Berthelot
Alexey Kurakin
Nicolas Papernot
MLAU, MIACV
338
427
0
03 Sep 2019
Certified Robustness to Adversarial Word Substitutions
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019
Robin Jia
Aditi Raghunathan
Kerem Göksel
Percy Liang
AAML
513
322
0
03 Sep 2019
Metric Learning for Adversarial Robustness
Neural Information Processing Systems (NeurIPS), 2019
Chengzhi Mao
Ziyuan Zhong
Junfeng Yang
Carl Vondrick
Baishakhi Ray
OOD
329
201
0
03 Sep 2019
Defeating Misclassification Attacks Against Transfer Learning
IEEE Transactions on Dependable and Secure Computing (TDSC), 2019
Bang Wu
Shuo Wang
Lizhen Qu
Cong Wang
Carsten Rudolph
Xiangwen Yang
AAML
148
7
0
29 Aug 2019
Deep Neural Network Ensembles against Deception: Ensemble Diversity, Accuracy and Robustness
IEEE International Conference on Mobile Adhoc and Sensor Systems (MASS), 2019
Ling Liu
Wenqi Wei
Ka-Ho Chow
Margaret Loper
Emre Gursoy
Stacey Truex
Yanzhao Wu
UQCV, AAML, FedML
154
68
0
29 Aug 2019
Adversarial Edit Attacks for Tree Data
Benjamin Paassen
AAML
74
0
0
25 Aug 2019
Improving Adversarial Robustness via Attention and Adversarial Logit Pairing
Frontiers in Artificial Intelligence (FAI), 2019
Dou Goodman
Xingjian Li
Ji Liu
Jun Huan
Tao Wei
AAML
112
8
0
23 Aug 2019
AdvHat: Real-world adversarial attack on ArcFace Face ID system
International Conference on Pattern Recognition (ICPR), 2019
Stepan Alekseevich Komkov
Aleksandr Petiushko
AAML, CVBM
182
335
0
23 Aug 2019
Testing Robustness Against Unforeseen Adversaries
Maximilian Kaufmann
Daniel Kang
Yi Sun
Steven Basart
Xuwang Yin
...
Adam Dziedzic
Franziska Boenisch
Tom B. Brown
Jacob Steinhardt
Dan Hendrycks
AAML
362
0
0
21 Aug 2019
Denoising and Verification Cross-Layer Ensemble Against Black-box Adversarial Attacks
Ka-Ho Chow
Wenqi Wei
Yanzhao Wu
Ling Liu
AAML
160
17
0
21 Aug 2019
Saccader: Improving Accuracy of Hard Attention Models for Vision
Neural Information Processing Systems (NeurIPS), 2019
Gamaleldin F. Elsayed
Simon Kornblith
Quoc V. Le
VLM
244
74
0
20 Aug 2019
Protecting Neural Networks with Hierarchical Random Switching: Towards Better Robustness-Accuracy Trade-off for Stochastic Defenses
International Joint Conference on Artificial Intelligence (IJCAI), 2019
Tianlin Li
Siyue Wang
Pin-Yu Chen
Yanzhi Wang
Brian Kulis
Xue Lin
S. Chin
AAML
162
45
0
20 Aug 2019
Hybrid Batch Attacks: Finding Black-box Adversarial Examples with Limited Queries
USENIX Security Symposium (USENIX Security), 2019
Fnu Suya
Jianfeng Chi
David Evans
Yuan Tian
AAML
426
94
0
19 Aug 2019
Adversarial Defense by Suppressing High-frequency Components
Zhendong Zhang
Cheolkon Jung
X. Liang
197
27
0
19 Aug 2019
SPOCC: Scalable POssibilistic Classifier Combination -- toward robust aggregation of classifiers
Expert Systems with Applications (ESWA), 2019
Mahmoud Albardan
John Klein
O. Colot
186
5
0
18 Aug 2019
Implicit Deep Learning
SIAM Journal on Mathematics of Data Science (SIMODS), 2019
L. Ghaoui
Fangda Gu
Bertrand Travacca
Armin Askari
Alicia Y. Tsai
AI4CE
398
199
0
17 Aug 2019
Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks
International Conference on Learning Representations (ICLR), 2019
Jiadong Lin
Chuanbiao Song
Kun He
Liwei Wang
John E. Hopcroft
AAML
693
710
0
17 Aug 2019
Adversarial shape perturbations on 3D point clouds
Daniel Liu
Ronald Yu
Hao Su
3DPC
228
12
0
16 Aug 2019
Convergence of Gradient Methods on Bilinear Zero-Sum Games
International Conference on Learning Representations (ICLR), 2019
Guojun Zhang
Yaoliang Yu
250
37
0
15 Aug 2019
AdvFaces: Adversarial Face Synthesis
Debayan Deb
Jianbang Zhang
Anil K. Jain
GAN, CVBM, AAML, PICV
220
146
0
14 Aug 2019
Adversarial Neural Pruning with Latent Vulnerability Suppression
Divyam Madaan
Jinwoo Shin
Sung Ju Hwang
AAML
217
3
0
12 Aug 2019
Defending Against Adversarial Iris Examples Using Wavelet Decomposition
Sobhan Soleymani
Ali Dabouei
J. Dawson
Nasser M. Nasrabadi
AAML
171
9
0
08 Aug 2019
Universal Adversarial Audio Perturbations
Sajjad Abdoli
L. G. Hafemann
Jérôme Rony
Ismail Ben Ayed
P. Cardinal
Alessandro Lameiras Koerich
AAML
330
59
0
08 Aug 2019
Robust Learning with Jacobian Regularization
Judy Hoffman
Daniel A. Roberts
Sho Yaida
OOD, AAML
177
193
0
07 Aug 2019
Improved Adversarial Robustness by Reducing Open Space Risk via Tent Activations
Andras Rozsa
Terrance E. Boult
AAML
140
18
0
07 Aug 2019
BlurNet: Defense by Filtering the Feature Maps
Ravi Raju
Mikko H. Lipasti
AAML
181
17
0
06 Aug 2019
MetaAdvDet: Towards Robust Detection of Evolving Adversarial Attacks
ACM Multimedia (ACM MM), 2019
Chen Ma
Chenxu Zhao
Hailin Shi
Li Chen
Junhai Yong
Dan Zeng
AAML
113
19
0
06 Aug 2019
A principled approach for generating adversarial images under non-smooth dissimilarity metrics
International Conference on Artificial Intelligence and Statistics (AISTATS), 2019
Aram-Alexandre Pooladian
Chris Finlay
Tim Hoheisel
Adam M. Oberman
AAML
172
3
0
05 Aug 2019
Adversarial Self-Defense for Cycle-Consistent GANs
Neural Information Processing Systems (NeurIPS), 2019
D. Bashkirova
Ben Usman
Kate Saenko
GAN
115
44
0
05 Aug 2019
Automated Detection System for Adversarial Examples with High-Frequency Noises Sieve
International Conference on Cryptography and Security Systems (ICCSS), 2019
D. D. Thang
Toshihiro Matsui
AAML
92
4
0
05 Aug 2019
Exploring the Robustness of NMT Systems to Nonsensical Inputs
Akshay Chaturvedi
K. Abijith
Utpal Garain
AAML
168
12
0
03 Aug 2019
Robustifying deep networks for image segmentation
Zheng Liu
Jinnian Zhang
Varun Jog
Po-Ling Loh
A. McMillan
AAML, OOD
131
7
0
01 Aug 2019
Adversarial Test on Learnable Image Encryption
Global Conference on Consumer Electronics (GCE), 2019
Maungmaung Aprilpyone
Warit Sirichotedumrong
Hitoshi Kiya
112
9
0
31 Jul 2019
Are Odds Really Odd? Bypassing Statistical Detection of Adversarial Examples
Hossein Hosseini
Sreeram Kannan
Radha Poovendran
AAML
126
19
0
28 Jul 2019
Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment
AAAI Conference on Artificial Intelligence (AAAI), 2019
Di Jin
Zhijing Jin
Joey Tianyi Zhou
Peter Szolovits
SILM, AAML
724
1,255
0
27 Jul 2019
Understanding Adversarial Robustness: The Trade-off between Minimum and Average Margin
Kaiwen Wu
Yaoliang Yu
AAML
125
9
0
26 Jul 2019
Interpretability Beyond Classification Output: Semantic Bottleneck Networks
M. Losch
Mario Fritz
Bernt Schiele
UQCV
222
70
0
25 Jul 2019
Page 129 of 142