Towards Deep Neural Network Architectures Robust to Adversarial Examples

International Conference on Learning Representations (ICLR) Workshop, 2015
11 December 2014
S. Gu
Luca Rigazio
AAML
arXiv:1412.5068 (abs, PDF, HTML)

Papers citing "Towards Deep Neural Network Architectures Robust to Adversarial Examples"

50 / 417 papers shown
ATRO: Adversarial Training with a Rejection Option
Masahiro Kato
Zhenghang Cui
Yoshihiro Fukuhara
AAML
178
11
0
24 Oct 2020
Adversarial Robustness of Supervised Sparse Coding
Jeremias Sulam
Ramchandran Muthumukar
R. Arora
AAML
258
25
0
22 Oct 2020
Exploiting Vulnerabilities of Deep Learning-based Energy Theft Detection in AMI through Adversarial Attacks
Jiangnan Li
Yingyuan Yang
Jinyuan Stella Sun
AAML
205
9
0
16 Oct 2020
A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning
Hongjun Wang
Guanbin Li
Xiaobai Liu
Liang Lin
GAN, AAML
216
26
0
15 Oct 2020
Increasing the Robustness of Semantic Segmentation Models with Painting-by-Numbers
Christoph Kamann
Burkhard Güssefeld
Robin Hutmacher
J. H. Metzen
Carsten Rother
201
22
0
12 Oct 2020
Understanding Local Robustness of Deep Neural Networks under Natural Variations
Ziyuan Zhong
Yuchi Tian
Baishakhi Ray
AAML
181
1
0
09 Oct 2020
Downscaling Attack and Defense: Turning What You See Back Into What You Get
A. Lohn
AAML
81
3
0
06 Oct 2020
Do Wider Neural Networks Really Help Adversarial Robustness?
Neural Information Processing Systems (NeurIPS), 2020
Boxi Wu
Jinghui Chen
Deng Cai
Xiaofei He
Quanquan Gu
AAML
408
105
0
03 Oct 2020
Adversarial Examples in Deep Learning for Multivariate Time Series Regression
International Conference on Artificial Intelligence and Pattern Recognition (AIPR), 2020
Gautam Raj Mode
K. A. Hoque
AAML, AI4TS
142
68
0
24 Sep 2020
Robust Deep Learning Ensemble against Deception
IEEE Transactions on Dependable and Secure Computing (TDSC), 2020
Wenqi Wei
Ling Liu
AAML
147
29
0
14 Sep 2020
Adversarial Machine Learning in Image Classification: A Survey Towards the Defender's Perspective
ACM Computing Surveys (ACM CSUR), 2020
G. R. Machado
Eugênio Silva
R. Goldschmidt
AAML
256
183
0
08 Sep 2020
Improving Resistance to Adversarial Deformations by Regularizing Gradients
Neurocomputing, 2020
Pengfei Xia
Bin Li
AAML
158
4
0
29 Aug 2020
Likelihood Landscapes: A Unifying Principle Behind Many Adversarial Defenses
Fu-Huei Lin
Rohit Mittapalli
Prithvijit Chattopadhyay
Daniel Bolya
Judy Hoffman
AAML
169
2
0
25 Aug 2020
Improving adversarial robustness of deep neural networks by using semantic information
Lina Wang
Rui Tang
Yawei Yue
Xingshu Chen
Wei Wang
Yi Zhu
Xuemei Zeng
AAML
205
17
0
18 Aug 2020
Defending Adversarial Examples via DNN Bottleneck Reinforcement
ACM Multimedia (ACM MM), 2020
Wenqing Liu
Miaojing Shi
Teddy Furon
Li Li
AAML
176
8
0
12 Aug 2020
Adversarial Examples on Object Recognition: A Comprehensive Survey
ACM Computing Surveys (ACM CSUR), 2020
A. Serban
E. Poll
Joost Visser
AAML
420
80
0
07 Aug 2020
vWitness: Certifying Web Page Interactions with Computer Vision
Dependable Systems and Networks (DSN), 2020
Shuang He
Lianying Zhao
David Lie
120
1
0
31 Jul 2020
Technologies for Trustworthy Machine Learning: A Survey in a Socio-Technical Context
Ehsan Toreini
Mhairi Aitken
Kovila P. L. Coopamootoo
Karen Elliott
Vladimiro González-Zelaya
P. Missier
Magdalene Ng
Aad van Moorsel
312
19
0
17 Jul 2020
Robustifying Reinforcement Learning Agents via Action Space Adversarial Training
American Control Conference (ACC), 2020
Kai Liang Tan
Yasaman Esfandiari
Xian Yeow Lee
Aakanksha
Soumik Sarkar
AAML
223
67
0
14 Jul 2020
Adversarial Attacks against Neural Networks in Audio Domain: Exploiting Principal Components
Ken Alparslan
Yigit Can Alparslan
Matthew Burlick
AAML
137
9
0
14 Jul 2020
Fast Training of Deep Neural Networks Robust to Adversarial Perturbations
IEEE Conference on High Performance Extreme Computing (HPEC), 2020
Justin A. Goodwin
Olivia M. Brown
Victoria Helus
OOD, AAML
93
3
0
08 Jul 2020
Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data
A. Sadeghi
Gang Wang
Meng Ma
G. Giannakis
OOD, FedML
111
4
0
07 Jul 2020
Understanding and Improving Fast Adversarial Training
Maksym Andriushchenko
Nicolas Flammarion
AAML
339
331
0
06 Jul 2020
Deep Learning Defenses Against Adversarial Examples for Dynamic Risk Assessment
Xabier Echeberria-Barrio
Amaia Gil-Lerchundi
Ines Goicoechea-Telleria
Raul Orduna Urrutia
AAML
145
5
0
02 Jul 2020
Opportunities and Challenges in Deep Learning Adversarial Robustness: A Survey
S. Silva
Peyman Najafirad
AAML, OOD
343
150
0
01 Jul 2020
ConFoc: Content-Focus Protection Against Trojan Attacks on Neural Networks
Miguel Villarreal-Vasquez
B. Bhargava
AAML
177
41
0
01 Jul 2020
Can We Mitigate Backdoor Attack Using Adversarial Detection Methods?
Kaidi Jin
Tianwei Zhang
Chao Shen
Yufei Chen
Ming Fan
Chenhao Lin
Ting Liu
AAML
103
16
0
26 Jun 2020
Orthogonal Deep Models As Defense Against Black-Box Attacks
M. Jalwana
Naveed Akhtar
Bennamoun
Lin Wang
AAML
190
11
0
26 Jun 2020
Imbalanced Gradients: A Subtle Cause of Overestimated Adversarial Robustness
Machine-mediated learning (ML), 2020
Jiabo He
Linxi Jiang
Hanxun Huang
Zejia Weng
James Bailey
Yu-Gang Jiang
AAML
284
11
0
24 Jun 2020
Counterexample-Guided Learning of Monotonic Neural Networks
Aishwarya Sivaraman
G. Farnadi
T. Millstein
Karen Ullrich
184
62
0
16 Jun 2020
Defensive Approximation: Securing CNNs using Approximate Computing
International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2020
Amira Guesmi
Ihsen Alouani
Khaled N. Khasawneh
M. Baklouti
T. Frikha
Mohamed Abid
Nael B. Abu-Ghazaleh
AAML
202
43
0
13 Jun 2020
Towards Robust Pattern Recognition: A Review
Proceedings of the IEEE (Proc. IEEE), 2020
Xu-Yao Zhang
Cheng-Lin Liu
C. Suen
OOD, HAI
224
126
0
12 Jun 2020
Calibrated Surrogate Losses for Adversarially Robust Classification
Annual Conference on Computational Learning Theory (COLT), 2020
Han Bao
Clayton Scott
Masashi Sugiyama
236
47
0
28 May 2020
Mitigating Advanced Adversarial Attacks with More Advanced Gradient Obfuscation Techniques
Han Qiu
Yi Zeng
Qinkai Zheng
Tianwei Zhang
Meikang Qiu
G. Memmi
AAML
136
14
0
27 May 2020
Enhancing Resilience of Deep Learning Networks by Means of Transferable Adversaries
IEEE International Joint Conference on Neural Networks (IJCNN), 2020
M. Seiler
Heike Trautmann
P. Kerschke
AAML
88
0
0
27 May 2020
Stable and expressive recurrent vision models
Drew Linsley
A. Ashok
L. Govindarajan
Rex G Liu
Thomas Serre
324
52
0
22 May 2020
Adversarial Weight Perturbation Helps Robust Generalization
Dongxian Wu
Shutao Xia
Yisen Wang
OOD, AAML
228
18
0
13 Apr 2020
Feature Partitioning for Robust Tree Ensembles and their Certification in Adversarial Scenarios
EURASIP Journal on Information Security (EURASIP J. Inf. Secur.), 2020
Stefano Calzavara
Claudio Lucchese
Federico Marcuzzi
S. Orlando
AAML
193
10
0
07 Apr 2020
Challenging the adversarial robustness of DNNs based on error-correcting output codes
Bowen Zhang
B. Tondi
Xixiang Lv
Mauro Barni
AAML
80
2
0
26 Mar 2020
Architectural Resilience to Foreground-and-Background Adversarial Noise
Carl Cheng
Evan Hu
AAML
131
0
0
23 Mar 2020
Toward Adversarial Robustness via Semi-supervised Robust Training
Yiming Li
Baoyuan Wu
Yan Feng
Yanbo Fan
Yong Jiang
Zhifeng Li
Shutao Xia
AAML
283
13
0
16 Mar 2020
Dynamic Divide-and-Conquer Adversarial Training for Robust Semantic Segmentation
IEEE International Conference on Computer Vision (ICCV), 2020
Xiaohan Li
Hengshuang Zhao
Jiaya Jia
AAML
192
46
0
14 Mar 2020
Search Space of Adversarial Perturbations against Image Filters
International Journal of Advanced Computer Science and Applications (IJACSA), 2020
D. D. Thang
Toshihiro Matsui
AAML
106
1
0
05 Mar 2020
Denoised Smoothing: A Provable Defense for Pretrained Classifiers
Hadi Salman
Mingjie Sun
Greg Yang
Ashish Kapoor
J. Zico Kolter
229
23
0
04 Mar 2020
Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness
Computer Vision and Pattern Recognition (CVPR), 2020
Ahmadreza Jeddi
M. Shafiee
Michelle Karg
C. Scharfenberger
A. Wong
OOD, AAML
378
72
0
02 Mar 2020
Gödel's Sentence Is An Adversarial Example But Unsolvable
Xiaodong Qi
Lansheng Han
AAML
161
0
0
25 Feb 2020
Adversarial Perturbations Prevail in the Y-Channel of the YCbCr Color Space
Camilo Pestana
Naveed Akhtar
Wei Liu
D. Glance
Lin Wang
AAML
146
10
0
25 Feb 2020
Mitigating Class Boundary Label Uncertainty to Reduce Both Model Bias and Variance
ACM Transactions on Knowledge Discovery from Data (TKDD), 2020
Matthew Almeida
Wei Ding
S. Crouter
Ping Chen
112
14
0
23 Feb 2020
More Data Can Expand the Generalization Gap Between Adversarially Robust and Standard Models
International Conference on Machine Learning (ICML), 2020
Lin Chen
Yifei Min
Mingrui Zhang
Amin Karbasi
OOD
301
66
0
11 Feb 2020
Adversarial Data Encryption
Yingdong Hu
Liang Zhang
W. Shan
Xiaoxiao Qin
Jinghuai Qi
Zhenzhou Wu
Yang Yuan
FedML, MedIm
128
0
0
10 Feb 2020
Page 4 of 9