Interpretation of Neural Networks is Fragile (arXiv: 1710.10547)
Amirata Ghorbani, Abubakar Abid, James Y. Zou
29 October 2017 · FAtt, AAML

Papers citing "Interpretation of Neural Networks is Fragile"

50 / 467 papers shown
sMRI-PatchNet: A novel explainable patch-based deep learning network for Alzheimer's disease diagnosis and discriminative atrophy localisation with Structural MRI
Xin Zhang, Liangxiu Han, Lianghao Han, Haoming Chen, Darren Dancey, Daoqiang Zhang
17 Feb 2023 · MedIm

GCI: A (G)raph (C)oncept (I)nterpretation Framework
Dmitry Kazhdan, B. Dimanov, Lucie Charlotte Magister, Pietro Barbiero, M. Jamnik, Pietro Liò
09 Feb 2023

Diagnosing and Rectifying Vision Models using Language
Yuhui Zhang, Jeff Z. HaoChen, Shih-Cheng Huang, Kuan-Chieh Jackson Wang, James Y. Zou, Serena Yeung
08 Feb 2023

Certified Interpretability Robustness for Class Activation Mapping
Alex Gu, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel
26 Jan 2023 · AAML

Explainable AI does not provide the explanations end-users are asking for
Savio Rozario, G. Cevora
25 Jan 2023 · XAI

MoreauGrad: Sparse and Robust Interpretation of Neural Networks via Moreau Envelope
Jingwei Zhang, Farzan Farnia
08 Jan 2023 · UQCV

Valid P-Value for Deep Learning-Driven Salient Region
Daiki Miwa, Vo Nguyen Le Duy, I. Takeuchi
06 Jan 2023 · FAtt, AAML

PEAK: Explainable Privacy Assistant through Automated Knowledge Extraction
Gonul Ayci, Arzucan Özgür, Murat Şensoy, P. Yolum
05 Jan 2023

Disentangled Explanations of Neural Network Predictions by Finding Relevant Subspaces
Pattarawat Chormai, J. Herrmann, Klaus-Robert Müller, G. Montavon
30 Dec 2022 · FAtt

Provable Robust Saliency-based Explanations
Chao Chen, Chenghua Guo, Guixiang Ma, Ming Zeng, Xi Zhang, Sihong Xie
28 Dec 2022 · AAML, FAtt

On the Equivalence of the Weighted Tsetlin Machine and the Perceptron
Jivitesh Sharma, Ole-Christoffer Granmo, Lei Jiao
27 Dec 2022

The Quantum Path Kernel: a Generalized Quantum Neural Tangent Kernel for Deep Quantum Machine Learning
Massimiliano Incudini, Michele Grossi, Antonio Mandarino, S. Vallecorsa, Alessandra Di Pierro, David Windridge
22 Dec 2022

AI Security for Geoscience and Remote Sensing: Challenges and Future Trends
Yonghao Xu, Tao Bai, Weikang Yu, Shizhen Chang, P. M. Atkinson, Pedram Ghamisi
19 Dec 2022 · AAML

Estimating the Adversarial Robustness of Attributions in Text with Transformers
Adam Ivankay, Mattia Rigotti, Ivan Girardi, Chiara Marchiori, P. Frossard
18 Dec 2022

Robust Explanation Constraints for Neural Networks
Matthew Wicker, Juyeon Heo, Luca Costabello, Adrian Weller
16 Dec 2022 · FAtt

Interpretable ML for Imbalanced Data
Damien Dablain, C. Bellinger, Bartosz Krawczyk, D. Aha, Nitesh V. Chawla
15 Dec 2022

Identifying the Source of Vulnerability in Explanation Discrepancy: A Case Study in Neural Text Classification
Ruixuan Tang, Hanjie Chen, Yangfeng Ji
10 Dec 2022 · AAML, FAtt

Spurious Features Everywhere -- Large-Scale Detection of Harmful Spurious Features in ImageNet
Yannic Neuhaus, Maximilian Augustin, Valentyn Boreiko, Matthias Hein
09 Dec 2022 · AAML

Post hoc Explanations may be Ineffective for Detecting Unknown Spurious Correlation
Julius Adebayo, M. Muelly, H. Abelson, Been Kim
09 Dec 2022

XRand: Differentially Private Defense against Explanation-Guided Attacks
Truc D. T. Nguyen, Phung Lai, Nhathai Phan, My T. Thai
08 Dec 2022 · AAML, SILM

Interpretation of Neural Networks is Susceptible to Universal Adversarial Perturbations
Haniyeh Ehsani Oskouie, Farzan Farnia
30 Nov 2022 · FAtt, AAML

Understanding and Enhancing Robustness of Concept-based Models
Sanchit Sinha, Mengdi Huai, Jianhui Sun, Aidong Zhang
29 Nov 2022 · AAML

Towards More Robust Interpretation via Local Gradient Alignment
Sunghwan Joo, Seokhyeon Jeong, Juyeon Heo, Adrian Weller, Taesup Moon
29 Nov 2022 · FAtt

Foiling Explanations in Deep Neural Networks
Snir Vitrack Tamam, Raz Lapid, Moshe Sipper
27 Nov 2022 · AAML

SEAT: Stable and Explainable Attention
Lijie Hu, Yixin Liu, Ninghao Liu, Mengdi Huai, Lichao Sun, Di Wang
23 Nov 2022 · OOD

Concept-based Explanations using Non-negative Concept Activation Vectors and Decision Tree for CNN Models
Gayda Mutahar, Tim Miller
19 Nov 2022 · FAtt

Data-Adaptive Discriminative Feature Localization with Statistically Guaranteed Interpretation
Ben Dai, Xiaotong Shen, Lingzhi Chen, Chunlin Li, Wei Pan
18 Nov 2022 · FAtt

CRAFT: Concept Recursive Activation FacTorization for Explainability
Thomas Fel, Agustin Picard, Louis Bethune, Thibaut Boissin, David Vigouroux, Julien Colin, Rémi Cadène, Thomas Serre
17 Nov 2022

Explainer Divergence Scores (EDS): Some Post-Hoc Explanations May be Effective for Detecting Unknown Spurious Correlations
Shea Cardozo, Gabriel Islas Montero, Dmitry Kazhdan, B. Dimanov, Maleakhi A. Wijaya, M. Jamnik, Pietro Liò
14 Nov 2022 · AAML

What Makes a Good Explanation?: A Harmonized View of Properties of Explanations
Zixi Chen, Varshini Subhash, Marton Havasi, Weiwei Pan, Finale Doshi-Velez
10 Nov 2022 · XAI, FAtt

On the Robustness of Explanations of Deep Neural Network Models: A Survey
Amlan Jyoti, Karthik Balaji Ganesh, Manoj Gayala, Nandita Lakshmi Tunuguntla, Sandesh Kamath, V. Balasubramanian
09 Nov 2022 · XAI, FAtt, AAML

Calibration Meets Explanation: A Simple and Effective Approach for Model Confidence Estimates
Dongfang Li, Baotian Hu, Qingcai Chen
06 Nov 2022

SoK: Modeling Explainability in Security Analytics for Interpretability, Trustworthiness, and Usability
Dipkamal Bhusal, Rosalyn Shin, Ajay Ashok Shewale, M. K. Veerabhadran, Michael Clifford, Sara Rampazzi, Nidhi Rastogi
31 Oct 2022 · FAtt, AAML

BOREx: Bayesian-Optimization-Based Refinement of Saliency Map for Image- and Video-Classification Models
Atsushi Kikuchi, Kotaro Uchida, Masaki Waga, Kohei Suenaga
31 Oct 2022 · FAtt

Safety Verification for Neural Networks Based on Set-boundary Analysis
Zhen Liang, Dejin Ren, Wanwei Liu, Ji Wang, Wenjing Yang, Bai Xue
09 Oct 2022 · AAML

Boundary-Aware Uncertainty for Feature Attribution Explainers
Davin Hill, A. Masoomi, Max Torop, S. Ghimire, Jennifer Dy
05 Oct 2022 · FAtt

Ensembling improves stability and power of feature selection for deep learning models
P. Gyawali, Xiaoxia Liu, James Y. Zou, Zihuai He
02 Oct 2022 · OOD, FedML

Towards Human-Compatible XAI: Explaining Data Differentials with Concept Induction over Background Knowledge
Cara L. Widmer, Md Kamruzzaman Sarker, Srikanth Nadella, Joshua L. Fiechter, I. Juvina, B. Minnery, Pascal Hitzler, Joshua Schwartz, M. Raymer
27 Sep 2022

Towards Faithful Model Explanation in NLP: A Survey
Qing Lyu, Marianna Apidianaki, Chris Callison-Burch
22 Sep 2022 · XAI

Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off
M. Zarlenga, Pietro Barbiero, Gabriele Ciravegna, G. Marra, Francesco Giannini, ..., F. Precioso, S. Melacci, Adrian Weller, Pietro Liò, M. Jamnik
19 Sep 2022

EMaP: Explainable AI with Manifold-based Perturbations
Minh Nhat Vu, Huy Mai, My T. Thai
18 Sep 2022 · AAML

Error Controlled Feature Selection for Ultrahigh Dimensional and Highly Correlated Feature Space Using Deep Learning
Arkaprabha Ganguli, D. Todem, T. Maiti
15 Sep 2022 · OOD

If Influence Functions are the Answer, Then What is the Question?
Juhan Bae, Nathan Ng, Alston Lo, Marzyeh Ghassemi, Roger C. Grosse
12 Sep 2022 · TDI

Foundations and Trends in Multimodal Machine Learning: Principles, Challenges, and Open Questions
Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency
07 Sep 2022

"Is your explanation stable?": A Robustness Evaluation Framework for
  Feature Attribution
"Is your explanation stable?": A Robustness Evaluation Framework for Feature Attribution
Yuyou Gan
Yuhao Mao
Xuhong Zhang
S. Ji
Yuwen Pu
Meng Han
Jianwei Yin
Ting Wang
FAtt
AAML
12
15
0
05 Sep 2022
Generating detailed saliency maps using model-agnostic methods
Maciej Sakowicz
04 Sep 2022 · FAtt

Concept-Based Techniques for "Musicologist-friendly" Explanations in a Deep Music Classifier
Francesco Foscarin, Katharina Hoedt, Verena Praher, A. Flexer, Gerhard Widmer
26 Aug 2022

SoK: Explainable Machine Learning for Computer Security Applications
A. Nadeem, D. Vos, Clinton Cao, Luca Pajola, Simon Dieck, Robert Baumgartner, S. Verwer
22 Aug 2022

SAFARI: Versatile and Efficient Evaluations for Robustness of Interpretability
Wei Huang, Xingyu Zhao, Gao Jin, Xiaowei Huang
19 Aug 2022 · AAML

Comparing Baseline Shapley and Integrated Gradients for Local Explanation: Some Additional Insights
Tianshu Feng, Zhipu Zhou, Tarun Joshi, V. Nair
12 Aug 2022 · FAtt