Interpretable Deep Learning under Fire
3 December 2018
Xinyang Zhang, Ningfei Wang, Hua Shen, S. Ji, Xiapu Luo, Ting Wang
AAML, AI4CE

Papers citing "Interpretable Deep Learning under Fire"

29 / 29 papers shown
Explainable Graph Neural Networks Under Fire
Zhong Li, Simon Geisler, Yuhang Wang, Stephan Günnemann, M. Leeuwen
AAML | 40 | 0 | 0 | 10 Jun 2024

From Attack to Defense: Insights into Deep Learning Security Measures in Black-Box Settings
Firuz Juraev, Mohammed Abuhamad, Eric Chan-Tin, George K. Thiruvathukal, Tamer Abuhmed
AAML | 27 | 0 | 0 | 03 May 2024

Stability of Explainable Recommendation
Sairamvinay Vijayaraghavan, Prasant Mohapatra
AAML | 38 | 1 | 0 | 03 May 2024

Are Classification Robustness and Explanation Robustness Really Strongly Correlated? An Analysis Through Input Loss Landscape
Tiejin Chen, Wenwang Huang, Linsey Pang, Dongsheng Luo, Hua Wei
OOD | 41 | 0 | 0 | 09 Mar 2024

Single-Class Target-Specific Attack against Interpretable Deep Learning Systems
Eldor Abdukhamidov, Mohammed Abuhamad, George K. Thiruvathukal, Hyoungshick Kim, Tamer Abuhmed
AAML | 25 | 2 | 0 | 12 Jul 2023

Valid P-Value for Deep Learning-Driven Salient Region
Daiki Miwa, Vo Nguyen Le Duy, I. Takeuchi
FAtt, AAML | 24 | 14 | 0 | 06 Jan 2023

"Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice
Giovanni Apruzzese, Hyrum S. Anderson, Savino Dambra, D. Freeman, Fabio Pierazzi, Kevin A. Roundy
AAML | 31 | 75 | 0 | 29 Dec 2022

Interpretations Cannot Be Trusted: Stealthy and Effective Adversarial Perturbations against Interpretable Deep Learning
Eldor Abdukhamidov, Mohammed Abuhamad, Simon S. Woo, Eric Chan-Tin, Tamer Abuhmed
AAML | 25 | 9 | 0 | 29 Nov 2022

On the Robustness of Explanations of Deep Neural Network Models: A Survey
Amlan Jyoti, Karthik Balaji Ganesh, Manoj Gayala, Nandita Lakshmi Tunuguntla, Sandesh Kamath, V. Balasubramanian
XAI, FAtt, AAML | 32 | 4 | 0 | 09 Nov 2022

SoK: Explainable Machine Learning for Computer Security Applications
A. Nadeem, D. Vos, Clinton Cao, Luca Pajola, Simon Dieck, Robert Baumgartner, S. Verwer
29 | 40 | 0 | 22 Aug 2022

Efficiently Training Low-Curvature Neural Networks
Suraj Srinivas, Kyle Matoba, Himabindu Lakkaraju, F. Fleuret
AAML | 23 | 15 | 0 | 14 Jun 2022

Backdooring Explainable Machine Learning
Maximilian Noppel, Lukas Peter, Christian Wressnegger
AAML | 14 | 5 | 0 | 20 Apr 2022

Debiased-CAM to mitigate systematic error with faithful visual explanations of machine learning
Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim
FAtt | 19 | 1 | 0 | 30 Jan 2022

Low-Rank Constraints for Fast Inference in Structured Models
Justin T. Chiu, Yuntian Deng, Alexander M. Rush
BDL | 29 | 13 | 0 | 08 Jan 2022

TnT Attacks! Universal Naturalistic Adversarial Patches Against Deep Neural Network Systems
Bao Gia Doan, Minhui Xue, Shiqing Ma, Ehsan Abbasnejad, D. Ranasinghe
AAML | 38 | 53 | 0 | 19 Nov 2021

DeepAID: Interpreting and Improving Deep Learning-based Anomaly Detection in Security Applications
Dongqi Han, Zhiliang Wang, Wenqi Chen, Ying Zhong, Su Wang, Han Zhang, Jiahai Yang, Xingang Shi, Xia Yin
AAML | 16 | 76 | 0 | 23 Sep 2021

Jointly Attacking Graph Neural Network and its Explanations
Wenqi Fan, Wei Jin, Xiaorui Liu, Han Xu, Xianfeng Tang, Suhang Wang, Qing Li, Jiliang Tang, Jianping Wang, Charu C. Aggarwal
AAML | 39 | 28 | 0 | 07 Aug 2021

Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks
Yulong Cao*, Ningfei Wang*, Chaowei Xiao, Dawei Yang, Jin Fang, Ruigang Yang, Qi Alfred Chen, Mingyan D. Liu, Bo-wen Li
AAML | 24 | 217 | 0 | 17 Jun 2021

TrojanZoo: Towards Unified, Holistic, and Practical Evaluation of Neural Backdoors
Ren Pang, Zheng-Wei Zhang, Xiangshan Gao, Zhaohan Xi, S. Ji, Peng Cheng, Xiapu Luo, Ting Wang
AAML | 27 | 31 | 0 | 16 Dec 2020

Debiased-CAM to mitigate image perturbations with faithful visual explanations of machine learning
Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim
FAtt | 18 | 18 | 0 | 10 Dec 2020

What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors
Yi-Shan Lin, Wen-Chuan Lee, Z. Berkay Celik
XAI | 26 | 93 | 0 | 22 Sep 2020

Model extraction from counterfactual explanations
Ulrich Aivodji, Alexandre Bolot, Sébastien Gambs
MIACV, MLAU | 27 | 51 | 0 | 03 Sep 2020

A simple defense against adversarial attacks on heatmap explanations
Laura Rieger, Lars Kai Hansen
FAtt, AAML | 25 | 37 | 0 | 13 Jul 2020

Proper Network Interpretability Helps Adversarial Robustness in Classification
Akhilan Boopathy, Sijia Liu, Gaoyuan Zhang, Cynthia Liu, Pin-Yu Chen, Shiyu Chang, Luca Daniel
AAML, FAtt | 19 | 66 | 0 | 26 Jun 2020

Adversarial Infidelity Learning for Model Interpretation
Jian Liang, Bing Bai, Yuren Cao, Kun Bai, Fei-Yue Wang
AAML | 44 | 18 | 0 | 09 Jun 2020

Deep Weakly-Supervised Learning Methods for Classification and Localization in Histology Images: A Survey
Jérôme Rony, Soufiane Belharbi, Jose Dolz, Ismail Ben Ayed, Luke McCaffrey, Eric Granger
25 | 70 | 0 | 08 Sep 2019

Interpreting Adversarial Examples by Activation Promotion and Suppression
Kaidi Xu, Sijia Liu, Gaoyuan Zhang, Mengshu Sun, Pu Zhao, Quanfu Fan, Chuang Gan, X. Lin
AAML, FAtt | 12 | 43 | 0 | 03 Apr 2019

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine
OOD | 317 | 11,681 | 0 | 09 Mar 2017

Adversarial Machine Learning at Scale
Alexey Kurakin, Ian Goodfellow, Samy Bengio
AAML | 261 | 3,109 | 0 | 04 Nov 2016