ResearchTrend.AI
© 2026 ResearchTrend.AI, All rights reserved.

arXiv:1902.04893 (v3, latest)
Why are Saliency Maps Noisy? Cause of and Solution to Noisy Saliency Maps

13 February 2019
Beomsu Kim
Junghoon Seo
Seunghyun Jeon
Jamyoung Koo
J. Choe
Taegyun Jeon
    FAtt

Papers citing "Why are Saliency Maps Noisy? Cause of and Solution to Noisy Saliency Maps"

36 papers
Query Circuits: Explaining How Language Models Answer User Prompts
Tung-Yu Wu
Fazl Barez
ReLM, LRM
29 Sep 2025
On Spectral Properties of Gradient-based Explanation Methods
European Conference on Computer Vision (ECCV), 2025
Amir Mehrpanah
Erik Englesson
Hossein Azizpour
FAtt
14 Aug 2025
On the Complexity-Faithfulness Trade-off of Gradient-Based Explanations
Amir Mehrpanah
Matteo Gamba
Kevin Smith
Hossein Azizpour
FAtt
14 Aug 2025
Attribution Explanations for Deep Neural Networks: A Theoretical Perspective
Huiqi Deng
Hongbin Pei
Quanshi Zhang
Mengnan Du
FAtt
11 Aug 2025
AutoSIGHT: Automatic Eye Tracking-based System for Immediate Grading of Human experTise
Byron Dowling
Jozef Probcin
Adam Czajka
01 Aug 2025
Uncertainty Propagation in XAI: A Comparison of Analytical and Empirical Estimators
Teodor Chiaburu
Felix Bießmann
Frank Haußer
01 Apr 2025
Unifying Perplexing Behaviors in Modified BP Attributions through Alignment Perspective
Guanhua Zheng
Jitao Sang
Changsheng Xu
AAML, FAtt
14 Mar 2025
Emergent Language in Open-Ended Environments
Cornelius Wolff
Julius Mayer
Elia Bruni
Xenia Ohmer
LLMAG
26 Aug 2024
Expected Grad-CAM: Towards gradient faithfulness
Vincenzo Buono
Peyman Sheikholharam Mashhadi
M. Rahat
Prayag Tiwari
Stefan Byttner
FAtt
03 Jun 2024
CoProNN: Concept-based Prototypical Nearest Neighbors for Explaining Vision Models
Teodor Chiaburu
Frank Haußer
Felix Bießmann
23 Apr 2024
Structured Gradient-based Interpretations via Norm-Regularized Adversarial Training
Shizhan Gong
Qi Dou
Farzan Farnia
FAtt
06 Apr 2024
Gradient based Feature Attribution in Explainable AI: A Technical Review
Yongjie Wang
Tong Zhang
Xu Guo
Zhiqi Shen
XAI
15 Mar 2024
Saliency strikes back: How filtering out high frequencies improves white-box explanations
International Conference on Machine Learning (ICML), 2023
Sabine Muzellec
Thomas Fel
Victor Boutin
Léo Andéol
R. V. Rullen
Thomas Serre
FAtt
18 Jul 2023
Towards Robust Aspect-based Sentiment Analysis through Non-counterfactual Augmentations
Xinyu Liu
Yanl Ding
Kaikai An
Chunyang Xiao
Pranava Madhyastha
Tong Xiao
Jingbo Zhu
24 Jun 2023
Causal Analysis for Robust Interpretability of Neural Networks
IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2023
Ola Ahmad
Nicolas Béreux
Loïc Baret
V. Hashemi
Freddy Lecue
CML
15 May 2023
On Pitfalls of RemOve-And-Retrain: Data Processing Inequality Perspective
J. Song
Keumgang Cha
Junghoon Seo
26 Apr 2023
New Definitions and Evaluations for Saliency Methods: Staying Intrinsic, Complete and Sound
Neural Information Processing Systems (NeurIPS), 2022
Arushi Gupta
Nikunj Saunshi
Dingli Yu
Kaifeng Lyu
Sanjeev Arora
AAML, FAtt, XAI
05 Nov 2022
Evaluation of importance estimators in deep learning classifiers for Computed Tomography
L. Brocki
Wistan Marchadour
Jonas Maison
B. Badic
P. Papadimitroulas
M. Hatt
Franck Vermet
N. C. Chung
30 Sep 2022
Guiding Visual Attention in Deep Convolutional Neural Networks Based on Human Eye Movements
Frontiers in Neuroscience (Front. Neurosci.), 2022
Leonard E. van Dyck
Sebastian J. Denzler
W. Gruber
21 Jun 2022
Understanding CNNs from excitations
IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2022
Zijian Ying
Qianmu Li
Zhichao Lian
Jun Hou
Tong Lin
Tao Wang
AAML, FAtt
02 May 2022
Fidelity of Interpretability Methods and Perturbation Artifacts in Neural Networks
L. Brocki
N. C. Chung
AAML
06 Mar 2022
Deeply Explain CNN via Hierarchical Decomposition
International Journal of Computer Vision (IJCV), 2022
Ming-Ming Cheng
Peng-Tao Jiang
Linghao Han
Liang Wang
Juil Sock
FAtt
23 Jan 2022
Evaluation of Interpretability for Deep Learning algorithms in EEG Emotion Recognition: A case study in Autism
J. M. M. Torres
Sara E. Medina-DeVilliers
T. Clarkson
M. Lerner
Giuseppe Riccardi
25 Nov 2021
Surrogate Model-Based Explainability Methods for Point Cloud NNs
IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2021
Hanxiao Tan
Helena Kotthaus
3DPC
28 Jul 2021
Deep Learning Image Recognition for Non-images
Boris Kovalerchuk
D. Kalla
Bedant Agarwal
28 Jun 2021
Do Input Gradients Highlight Discriminative Features?
Neural Information Processing Systems (NeurIPS), 2021
Harshay Shah
Prateek Jain
Praneeth Netrapalli
AAML, FAtt
25 Feb 2021
Attribution Mask: Filtering Out Irrelevant Features By Recursively Focusing Attention on Inputs of DNNs
Jaehwan Lee
Joon-Hyuk Chang
TDI, FAtt
15 Feb 2021
Advances in Electron Microscopy with Deep Learning
Jeffrey M. Ede
04 Jan 2021
Rethinking Positive Aggregation and Propagation of Gradients in Gradient-based Saliency Methods
Ashkan Khakzar
Soroosh Baselizadeh
Nassir Navab
FAtt
01 Dec 2020
Input Bias in Rectified Gradients and Modified Saliency Maps
L. Brocki
N. C. Chung
FAtt, AAML, XAI
10 Nov 2020
Review: Deep Learning in Electron Microscopy
Jeffrey M. Ede
17 Sep 2020
Explaining Regression Based Neural Network Model
Mégane Millan
Catherine Achard
FAtt
15 Apr 2020
When Explanations Lie: Why Many Modified BP Attributions Fail
International Conference on Machine Learning (ICML), 2019
Leon Sixt
Maximilian Granz
Tim Landgraf
BDL, FAtt, XAI
20 Dec 2019
Improving Feature Attribution through Input-specific Network Pruning
Ashkan Khakzar
Soroosh Baselizadeh
Saurabh Khanduja
Christian Rupprecht
S. T. Kim
Nassir Navab
FAtt
25 Nov 2019
Concept Saliency Maps to Visualize Relevant Features in Deep Generative Models
International Conference on Machine Learning and Applications (ICMLA), 2019
L. Brocki
N. C. Chung
FAtt
29 Oct 2019
Saliency is a Possible Red Herring When Diagnosing Poor Generalization
J. Viviano
B. Simpson
Francis Dutil
Yoshua Bengio
Joseph Paul Cohen
FAtt
01 Oct 2019