Analysis of classifiers' robustness to adversarial perturbations

arXiv:1502.02590 · 9 February 2015
Alhussein Fawzi, Omar Fawzi, P. Frossard
AAML

Papers citing "Analysis of classifiers' robustness to adversarial perturbations"

39 / 39 papers shown

A Survey of Neural Network Robustness Assessment in Image Recognition
Jie Wang, Jun Ai, Minyan Lu, Haoran Su, Dan Yu, Yutao Zhang, Junda Zhu, Jingyu Liu · AAML · 12 Apr 2024

When are Local Queries Useful for Robust Learning?
Pascale Gourdeau, Varun Kanade, Marta Z. Kwiatkowska, J. Worrell · OOD · 12 Oct 2022

GAMA: Generative Adversarial Multi-Object Scene Attacks
Abhishek Aich, Calvin-Khang Ta, Akash Gupta, Chengyu Song, S. Krishnamurthy, M. Salman Asif, A. Roy-Chowdhury · AAML · 20 Sep 2022

An Efficient Method for Sample Adversarial Perturbations against Nonlinear Support Vector Machines
Wen Su, Qingna Li · AAML · 12 Jun 2022

Building Robust Ensembles via Margin Boosting
Dinghuai Zhang, Hongyang R. Zhang, Aaron Courville, Yoshua Bengio, Pradeep Ravikumar, A. Suggala · AAML, UQCV · 07 Jun 2022

Attacking and Defending Deep Reinforcement Learning Policies
Chao Wang · AAML · 16 May 2022

Sample Complexity Bounds for Robustly Learning Decision Lists against Evasion Attacks
Pascale Gourdeau, Varun Kanade, Marta Z. Kwiatkowska, J. Worrell · AAML · 12 May 2022

The Effects of Regularization and Data Augmentation are Class Dependent
Randall Balestriero, Léon Bottou, Yann LeCun · 07 Apr 2022

Attack Transferability Characterization for Adversarially Robust Multi-label Classification
Zhuo Yang, Yufei Han, Xiangliang Zhang · AAML · 29 Jun 2021

A Unified Game-Theoretic Interpretation of Adversarial Robustness
Jie Ren, Die Zhang, Yisen Wang, Lu Chen, Zhanpeng Zhou, ..., Xu Cheng, Xin Eric Wang, Meng Zhou, Jie Shi, Quanshi Zhang · AAML · 12 Mar 2021

Achieving Adversarial Robustness Requires An Active Teacher
Chao Ma, Lexing Ying · 14 Dec 2020

Recent Advances in Understanding Adversarial Robustness of Deep Neural Networks
Tao Bai, Jinqi Luo, Jun Zhao · AAML · 03 Nov 2020

Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness
Guillermo Ortiz-Jiménez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard · AAML · 19 Oct 2020

Adversarial Examples on Object Recognition: A Comprehensive Survey
A. Serban, E. Poll, Joost Visser · AAML · 07 Aug 2020

Adversarial Attacks against Neural Networks in Audio Domain: Exploiting Principal Components
Ken Alparslan, Yigit Can Alparslan, Matthew Burlick · AAML · 14 Jul 2020

On Counterfactual Explanations under Predictive Multiplicity
Martin Pawelczyk, Klaus Broelemann, Gjergji Kasneci · 23 Jun 2020

Spanning Attack: Reinforce Black-box Attacks with Unlabeled Data
Lu Wang, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh, Yuan Jiang · AAML · 11 May 2020

Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness
Ahmadreza Jeddi, M. Shafiee, Michelle Karg, C. Scharfenberger, A. Wong · OOD, AAML · 02 Mar 2020

Understanding and Quantifying Adversarial Examples Existence in Linear Classification
Xupeng Shi, A. Ding · AAML · 27 Oct 2019

Cross-Domain Transferability of Adversarial Perturbations
Muzammal Naseer, Salman H. Khan, M. H. Khan, F. Khan, Fatih Porikli · AAML · 28 May 2019

A Kernelized Manifold Mapping to Diminish the Effect of Adversarial Perturbations
Saeid Asgari Taghanaki, Kumar Abhishek, Shekoofeh Azizi, Ghassan Hamarneh · AAML · 03 Mar 2019

Stable and Fair Classification
Lingxiao Huang, Nisheeth K. Vishnoi · FaML · 21 Feb 2019

PeerNets: Exploiting Peer Wisdom Against Adversarial Attacks
Jan Svoboda, Jonathan Masci, Federico Monti, M. Bronstein, Leonidas J. Guibas · AAML, GNN · 31 May 2018

Laplacian Networks: Bounding Indicator Function Smoothness for Neural Network Robustness
Carlos Lassance, Vincent Gripon, Antonio Ortega · AAML · 24 May 2018

Robust GANs against Dishonest Adversaries
Zhi Xu, Chengtao Li, Stefanie Jegelka · AAML · 27 Feb 2018

Security Analysis and Enhancement of Model Compressed Deep Learning Systems under Adversarial Attacks
Qi Liu, Tao Liu, Zihao Liu, Yanzhi Wang, Yier Jin, Wujie Wen · AAML · 14 Feb 2018

A3T: Adversarially Augmented Adversarial Training
Akram Erraqabi, A. Baratin, Yoshua Bengio, Simon Lacoste-Julien · AAML · 12 Jan 2018

Measuring the tendency of CNNs to Learn Surface Statistical Regularities
Jason Jo, Yoshua Bengio · AAML · 30 Nov 2017

Towards Deep Learning Models Resistant to Adversarial Attacks
A. Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu · SILM, OOD · 19 Jun 2017

Robustness of classifiers to universal perturbations: a geometric perspective
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, P. Frossard, Stefano Soatto · AAML · 26 May 2017

Parseval Networks: Improving Robustness to Adversarial Examples
Moustapha Cissé, Piotr Bojanowski, Edouard Grave, Yann N. Dauphin, Nicolas Usunier · AAML · 28 Apr 2017

The Space of Transferable Adversarial Examples
Florian Tramèr, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick D. McDaniel · AAML, SILM · 11 Apr 2017

Universal adversarial perturbations
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, P. Frossard · AAML · 26 Oct 2016

Safety Verification of Deep Neural Networks
Xiaowei Huang, M. Kwiatkowska, Sen Wang, Min Wu · AAML · 21 Oct 2016

Fine-grained Recognition in the Noisy Wild: Sensitivity Analysis of Convolutional Neural Networks Approaches
E. Rodner, Marcel Simon, Robert B. Fisher, Joachim Denzler · 21 Oct 2016

Towards Verified Artificial Intelligence
S. Seshia, Dorsa Sadigh, S. Shankar Sastry · 27 Jun 2016

Exploring the Space of Adversarial Images
Pedro Tabacof, Eduardo Valle · AAML · 19 Oct 2015

Improving Back-Propagation by Adding an Adversarial Gradient
Arild Nøkland · AAML · 14 Oct 2015

Evasion and Hardening of Tree Ensemble Classifiers
Alex Kantchelian, J. D. Tygar, A. Joseph · AAML · 25 Sep 2015