Interpretations Cannot Be Trusted: Stealthy and Effective Adversarial Perturbations against Interpretable Deep Learning
arXiv:2211.15926 | 29 November 2022
Eldor Abdukhamidov, Mohammed Abuhamad, Simon S. Woo, Eric Chan-Tin, Tamer Abuhmed
AAML
Papers citing "Interpretations Cannot Be Trusted: Stealthy and Effective Adversarial Perturbations against Interpretable Deep Learning"
6 / 6 papers shown

Impact of Architectural Modifications on Deep Learning Adversarial Robustness
Firuz Juraev, Mohammed Abuhamad, Simon S. Woo, George K. Thiruvathukal, Tamer Abuhmed
AAML | 30 | 0 | 0 | 03 May 2024

Unveiling Vulnerabilities in Interpretable Deep Learning Systems with Query-Efficient Black-box Attacks
Eldor Abdukhamidov, Mohammed Abuhamad, Simon S. Woo, Eric Chan-Tin, Tamer Abuhmed
AAML | 14 | 2 | 0 | 21 Jul 2023

Microbial Genetic Algorithm-based Black-box Attack against Interpretable Deep Learning Systems
Eldor Abdukhamidov, Mohammed Abuhamad, Simon S. Woo, Eric Chan-Tin, Tamer Abuhmed
AAML | 19 | 1 | 0 | 13 Jul 2023

Single-Class Target-Specific Attack against Interpretable Deep Learning Systems
Eldor Abdukhamidov, Mohammed Abuhamad, George K. Thiruvathukal, Hyoungshick Kim, Tamer Abuhmed
AAML | 25 | 2 | 0 | 12 Jul 2023

Densely Connected Convolutional Networks
Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger
PINN, 3DV | 252 | 36,362 | 0 | 25 Aug 2016

Adversarial examples in the physical world
Alexey Kurakin, Ian Goodfellow, Samy Bengio
SILM, AAML | 281 | 5,835 | 0 | 08 Jul 2016