SmoothGrad: removing noise by adding noise

12 June 2017
D. Smilkov, Nikhil Thorat, Been Kim, F. Viégas, Martin Wattenberg · FAtt · ODL
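As its title suggests, SmoothGrad reduces the visual noise of a gradient-based saliency map by adding noise to the input: it averages the maps computed on many Gaussian-perturbed copies of the image. A minimal sketch of that averaging loop, assuming a PyTorch-style classifier (the function and parameter names are illustrative, not taken from the authors' code):

    import torch

    def smoothgrad(model, x, target_class, n_samples=50, noise_frac=0.15):
        # Illustrative sketch: scale the Gaussian noise to a fraction of the
        # input's value range, the single noise-level knob the paper describes.
        sigma = noise_frac * (x.max() - x.min())
        grad_sum = torch.zeros_like(x)
        for _ in range(n_samples):
            noisy = (x + sigma * torch.randn_like(x)).requires_grad_(True)
            model(noisy)[0, target_class].backward()  # d(class score)/d(input)
            grad_sum += noisy.grad
        return grad_sum / n_samples  # the averaged, visibly smoother map

The paper reports that a noise level around 10-20% of the input range and a few dozen samples are typically enough to produce visibly cleaner maps.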

Papers citing "SmoothGrad: removing noise by adding noise"

50 / 1,161 papers shown

Making Sense of Dependence: Efficient Black-box Explanations Using Dependence Measure
Paul Novello, Thomas Fel, David Vigouroux · FAtt
13 Jun 2022

Geometrically Guided Integrated Gradients
Md. Mahfuzur Rahman, N. Lewis, Sergey Plis · FAtt · AAML
13 Jun 2022

A Functional Information Perspective on Model Interpretation
Itai Gat, Nitay Calderon, Roi Reichart, Tamir Hazan · AAML · FAtt
12 Jun 2022

Diffeomorphic Counterfactuals with Generative Models
Ann-Kathrin Dombrowski, Jan E. Gerken, Klaus-Robert Müller, Pan Kessel · DiffM · BDL
10 Jun 2022

GAMR: A Guided Attention Model for (visual) Reasoning
Mohit Vaishnav, Thomas Serre · LRM
10 Jun 2022

Learning to Estimate Shapley Values with Vision Transformers
Ian Covert, Chanwoo Kim, Su-In Lee · FAtt
10 Jun 2022

DORA: Exploring Outlier Representations in Deep Neural Networks
Kirill Bykov, Mayukh Deb, Dennis Grinwald, Klaus-Robert Müller, Marina M.-C. Höhne
09 Jun 2022

Challenges in Applying Explainability Methods to Improve the Fairness of NLP Models
Esma Balkir, S. Kiritchenko, I. Nejadgholi, Kathleen C. Fraser
08 Jun 2022

From Attribution Maps to Human-Understandable Explanations through Concept Relevance Propagation
Reduan Achtibat, Maximilian Dreyer, Ilona Eisenbraun, S. Bosse, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin · FAtt
07 Jun 2022

Saliency Cards: A Framework to Characterize and Compare Saliency Methods
Angie Boggust, Harini Suresh, Hendrik Strobelt, John Guttag, Arvind Satyanarayan · FAtt · XAI
07 Jun 2022

A Human-Centric Take on Model Monitoring
Murtuza N. Shergadwala, Himabindu Lakkaraju, K. Kenthapadi
06 Jun 2022

Dual Decomposition of Convex Optimization Layers for Consistent Attention in Medical Images
Tom Ron, M. Weiler-Sagie, Tamir Hazan · FAtt · MedIm
06 Jun 2022

Use-Case-Grounded Simulations for Explanation Evaluation
Valerie Chen, Nari Johnson, Nicholay Topin, Gregory Plumb, Ameet Talwalkar · FAtt · ELM
05 Jun 2022

Interpretable Mixture of Experts
Aya Abdelsalam Ismail, Sercan O. Arik, Jinsung Yoon, Ankur Taly, S. Feizi, Tomas Pfister · MoE
05 Jun 2022

HEX: Human-in-the-loop Explainability via Deep Reinforcement Learning
Michael T. Lash
02 Jun 2022

Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post Hoc Explanations
Tessa Han, Suraj Srinivas, Himabindu Lakkaraju · FAtt
02 Jun 2022

Optimizing Relevance Maps of Vision Transformers Improves Robustness
Hila Chefer, Idan Schwartz, Lior Wolf · ViT
02 Jun 2022

Why Did This Model Forecast This Future? Closed-Form Temporal Saliency Towards Causal Explanations of Probabilistic Forecasts
Chirag Raman, Hayley Hung, Marco Loog
01 Jun 2022

Attribution-based Explanations that Provide Recourse Cannot be Robust
H. Fokkema, R. D. Heide, T. Erven · FAtt
31 May 2022

Comparing interpretation methods in mental state decoding analyses with deep learning models
A. Thomas, Christopher Ré, R. Poldrack · AI4CE
31 May 2022

Searching for the Essence of Adversarial Perturbations
Dennis Y. Menn, Tzu-hsun Feng, Hung-yi Lee · AAML
30 May 2022

CHALLENGER: Training with Attribution Maps
Christian Tomani, Daniel Cremers
30 May 2022

Saliency Map Based Data Augmentation
Jalal Al-Afandi, B. Magyar, András Horváth
29 May 2022

How explainable are adversarially-robust CNNs?
Mehdi Nourelahi, Lars Kotthoff, Peijie Chen, Anh Totti Nguyen · AAML · FAtt
25 May 2022

Deletion and Insertion Tests in Regression Models
Naofumi Hama, Masayoshi Mase, Art B. Owen
25 May 2022

Faithful Explanations for Deep Graph Models
Zifan Wang, Yuhang Yao, Chaoran Zhang, Han Zhang, Youjie Kang, Carlee Joe-Wong, Matt Fredrikson, Anupam Datta · FAtt
24 May 2022

A Fine-grained Interpretability Evaluation Benchmark for Neural NLP
Lijie Wang, Yaozong Shen, Shu-ping Peng, Shuai Zhang, Xinyan Xiao, Hao Liu, Hongxuan Tang, Ying-Cong Chen, Hua Wu, Haifeng Wang · ELM
23 May 2022

Learnable Visual Words for Interpretable Image Recognition
Wenxi Xiao, Zhengming Ding, Hongfu Liu · VLM
22 May 2022

Towards Better Understanding Attribution Methods
Sukrut Rao, Moritz Böhle, Bernt Schiele · XAI
20 May 2022

The Solvability of Interpretability Evaluation Metrics
Yilun Zhou, J. Shah
18 May 2022

Policy Distillation with Selective Input Gradient Regularization for Efficient Interpretability
Jinwei Xing, Takashi Nagata, Xinyun Zou, Emre Neftci, J. Krichmar · AAML
18 May 2022

Fairness via Explanation Quality: Evaluating Disparities in the Quality of Post hoc Explanations
Jessica Dai, Sohini Upadhyay, Ulrich Aïvodji, Stephen H. Bach, Himabindu Lakkaraju
15 May 2022

Clinical outcome prediction under hypothetical interventions -- a representation learning framework for counterfactual reasoning
Yikuan Li, M. Mamouei, Shishir Rao, A. Hassaine, D. Canoy, Thomas Lukasiewicz, K. Rahimi, G. Salimi-Khorshidi · OOD · CML · AI4CE
15 May 2022

Explainable Deep Learning Methods in Medical Image Classification: A Survey
Cristiano Patrício, João C. Neves, Luís F. Teixeira · XAI
10 May 2022

How Does Frequency Bias Affect the Robustness of Neural Image Classifiers against Common Corruption and Adversarial Perturbations?
Alvin Chan, Yew-Soon Ong, Clement Tan · AAML
09 May 2022

Do Different Deep Metric Learning Losses Lead to Similar Learned Features?
Konstantin Kobs, M. Steininger, Andrzej Dulny, Andreas Hotho
05 May 2022

Understanding CNNs from excitations
Zijian Ying, Qianmu Li, Zhichao Lian, Jun Hou, Tong Lin, Tao Wang · AAML · FAtt
02 May 2022

Poly-CAM: High resolution class activation map for convolutional neural networks
A. Englebert, O. Cornu, Christophe De Vleeschouwer
28 Apr 2022

On the Limitations of Dataset Balancing: The Lost Battle Against Spurious Correlations
Roy Schwartz, Gabriel Stanovsky
27 Apr 2022

Locally Aggregated Feature Attribution on Natural Language Model Understanding
Shenmin Zhang, Jin Wang, Haitao Jiang, Rui Song · FAtt
22 Apr 2022

Behind the Machine's Gaze: Neural Networks with Biologically-inspired Constraints Exhibit Human-like Visual Attention
Leo Schwinn, Doina Precup, Bjoern M. Eskofier, Dario Zanca
19 Apr 2022

Missingness Bias in Model Debugging
Saachi Jain, Hadi Salman, E. Wong, Pengchuan Zhang, Vibhav Vineet, Sai H. Vemprala, Aleksander Madry
19 Apr 2022

OccAM's Laser: Occlusion-based Attribution Maps for 3D Object Detectors on LiDAR Data
David Schinagl, Georg Krispel, Horst Possegger, P. Roth, Horst Bischof · 3DPC
13 Apr 2022

Maximum Entropy Baseline for Integrated Gradients
Hanxiao Tan · FAtt
12 Apr 2022

A Multilingual Perspective Towards the Evaluation of Attribution Methods in Natural Language Inference
Kerem Zaman, Yonatan Belinkov
11 Apr 2022

Re-Examining Human Annotations for Interpretable NLP
Cheng-Han Chiang, Hung-yi Lee · FAtt · XAI
10 Apr 2022

Explainable and Interpretable Diabetic Retinopathy Classification Based on Neural-Symbolic Learning
Se-In Jang, M. Girard, Alexandre Hoang Thiery
01 Apr 2022

Improving Adversarial Transferability via Neuron Attribution-Based Attacks
Jianping Zhang, Weibin Wu, Jen-tse Huang, Yizhan Huang, Wenxuan Wang, Yuxin Su, Michael R. Lyu · AAML
31 Mar 2022

Towards Interpretable Deep Reinforcement Learning Models via Inverse Reinforcement Learning
Yuansheng Xie, Soroush Vosoughi, Saeed Hassanpour
30 Mar 2022

Recognition of polar lows in Sentinel-1 SAR images with deep learning
J. Grahn, F. Bianchi
30 Mar 2022