A Note about: Local Explanation Methods for Deep Neural Networks lack Sensitivity to Parameter Values

arXiv:1806.04205, 11 June 2018
Mukund Sundararajan, Ankur Taly
FAtt

Papers citing "A Note about: Local Explanation Methods for Deep Neural Networks lack Sensitivity to Parameter Values"

13 papers

Tangentially Aligned Integrated Gradients for User-Friendly Explanations
Irish Conference on Artificial Intelligence and Cognitive Science (AICS), 2025
Lachlan Simpson, Federico Costanza, Kyle Millar, A. Cheng, Cheng-Chew Lim, Hong-Gunn Chew
FAtt
11 Mar 2025

From Flexibility to Manipulation: The Slippery Slope of XAI Evaluation
Kristoffer Wickstrøm, Marina M.-C. Höhne, Anna Hedström
AAML
07 Dec 2024

Expected Grad-CAM: Towards gradient faithfulness
Vincenzo Buono, Peyman Sheikholharam Mashhadi, M. Rahat, Prayag Tiwari, Stefan Byttner
FAtt
03 Jun 2024

A Fresh Look at Sanity Checks for Saliency Maps
Anna Hedström, Leander Weber, Sebastian Lapuschkin, Marina M.-C. Höhne
FAtt, LRM
03 May 2024

Sanity Checks Revisited: An Exploration to Repair the Model Parameter Randomisation Test
Anna Hedström, Leander Weber, Sebastian Lapuschkin, Marina M.-C. Höhne
LRM
12 Jan 2024

The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus
Anna Hedström, P. Bommer, Kristoffer K. Wickstrom, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne
14 Feb 2023

Towards Improved Input Masking for Convolutional Neural Networks
IEEE International Conference on Computer Vision (ICCV), 2022
S. Balasubramanian, Soheil Feizi
AAML
26 Nov 2022

Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations
Computer Vision and Pattern Recognition (CVPR), 2022
Alexander Binder, Leander Weber, Sebastian Lapuschkin, G. Montavon, Klaus-Robert Müller, Wojciech Samek
FAtt, AAML
22 Nov 2022

Comparing Baseline Shapley and Integrated Gradients for Local Explanation: Some Additional Insights
Tianshu Feng, Zhipu Zhou, Tarun Joshi, V. Nair
FAtt
12 Aug 2022

GANMEX: One-vs-One Attributions Guided by GAN-based Counterfactual Explanation Baselines
International Conference on Machine Learning (ICML), 2020
Sheng-Min Shih, Pin-Ju Tien, Zohar Karnin
FAtt
11 Nov 2020

Explaining Explanations: Axiomatic Feature Interactions for Deep Networks
Journal of Machine Learning Research (JMLR), 2020
Joseph D. Janizek, Pascal Sturmfels, Su-In Lee
FAtt
10 Feb 2020

XRAI: Better Attributions Through Regions
IEEE International Conference on Computer Vision (ICCV), 2019
A. Kapishnikov, Tolga Bolukbasi, Fernanda Viégas, Michael Terry
FAtt, XAI
06 Jun 2019

Local Explanation Methods for Deep Neural Networks Lack Sensitivity to Parameter Values
Julius Adebayo, Justin Gilmer, Ian Goodfellow, Been Kim
FAtt, AAML
08 Oct 2018