SmoothGrad: removing noise by adding noise
arXiv:1706.03825 · 12 June 2017
D. Smilkov, Nikhil Thorat, Been Kim, F. Viégas, Martin Wattenberg · FAtt, ODL
Papers citing "SmoothGrad: removing noise by adding noise" (50 of 1,161 shown)
Reconstructing Actions To Explain Deep Reinforcement Learning
Xuan Chen, Zifan Wang, Yucai Fan, Bonan Jin, Piotr (Peter) Mardziel, Carlee Joe-Wong, Anupam Datta · FAtt · 17 Sep 2020

Captum: A unified and generic model interpretability library for PyTorch
Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, B. Alsallakh, ..., Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, Orion Reblitz-Richardson · FAtt · 16 Sep 2020

Are Interpretations Fairly Evaluated? A Definition Driven Pipeline for Post-Hoc Interpretability
Ninghao Liu, Yunsong Meng, Xia Hu, Tie Wang, Bo Long · XAI, FAtt · 16 Sep 2020

Beyond Individualized Recourse: Interpretable and Interactive Summaries of Actionable Recourses
Kaivalya Rawal, Himabindu Lakkaraju · 15 Sep 2020

MeLIME: Meaningful Local Explanation for Machine Learning Models
T. Botari, Frederik Hvilshoj, Rafael Izbicki, A. Carvalho · AAML, FAtt · 12 Sep 2020
Understanding the Role of Individual Units in a Deep Neural Network
David Bau, Jun-Yan Zhu, Hendrik Strobelt, Àgata Lapedriza, Bolei Zhou, Antonio Torralba · GAN · 10 Sep 2020

Quantifying the Preferential Direction of the Model Gradient in Adversarial Training With Projected Gradient Descent
Ricardo Bigolin Lanfredi, Joyce D. Schroeder, Tolga Tasdizen · 10 Sep 2020

How Good is your Explanation? Algorithmic Stability Measures to Assess the Quality of Explanations for Deep Neural Networks
Thomas Fel, David Vigouroux, Rémi Cadène, Thomas Serre · XAI, FAtt · 07 Sep 2020

Quantifying Explainability of Saliency Methods in Deep Neural Networks with a Synthetic Dataset
Erico Tjoa, Cuntai Guan · XAI, FAtt · 07 Sep 2020

Explainable Artificial Intelligence for Process Mining: A General Overview and Application of a Novel Local Explanation Approach for Predictive Process Monitoring
Nijat Mehdiyev, Peter Fettke · AI4TS · 04 Sep 2020
Estimating Example Difficulty Using Variance of Gradients
Chirag Agarwal, Daniel D'souza, Sara Hooker · 26 Aug 2020

How Useful Are the Machine-Generated Interpretations to General Users? A Human Evaluation on Guessing the Incorrectly Predicted Labels
Hua Shen, Ting-Hao 'Kenneth' Huang · FAtt, HAI · 26 Aug 2020

Making Neural Networks Interpretable with Attribution: Application to Implicit Signals Prediction
Darius Afchar, Romain Hennequin · FAtt, XAI · 26 Aug 2020

DNN2LR: Interpretation-inspired Feature Crossing for Real-world Tabular Data
Zhaocheng Liu, Qiang Liu, Haoli Zhang, Yuntian Chen · 22 Aug 2020

A Unified Taylor Framework for Revisiting Attribution Methods
Huiqi Deng, Na Zou, Mengnan Du, Weifu Chen, Guo-Can Feng, Xia Hu · FAtt, TDI · 21 Aug 2020
Explainable Recommender Systems via Resolving Learning Representations
Ninghao Liu, Yong Ge, Li Li, Xia Hu, Rui Chen, Soo-Hyun Choi · 21 Aug 2020

iCaps: An Interpretable Classifier via Disentangled Capsule Networks
Dahuin Jung, Jonghyun Lee, Jihun Yi, Sungroh Yoon · 20 Aug 2020

Intelligence plays dice: Stochasticity is essential for machine learning
M. Sabuncu · 17 Aug 2020

Survey of XAI in digital pathology
Milda Pocevičiūtė, Gabriel Eilertsen, Claes Lundström · 14 Aug 2020

Can We Trust Your Explanations? Sanity Checks for Interpreters in Android Malware Analysis
Ming Fan, Wenying Wei, Xiaofei Xie, Yang Liu, X. Guan, Ting Liu · FAtt, AAML · 13 Aug 2020
Reliable Post hoc Explanations: Modeling Uncertainty in Explainability
Dylan Slack, Sophie Hilgard, Sameer Singh, Himabindu Lakkaraju · FAtt · 11 Aug 2020

Informative Dropout for Robust Representation Learning: A Shape-bias Perspective
Baifeng Shi, Dinghuai Zhang, Qi Dai, Zhanxing Zhu, Yadong Mu, Jingdong Wang · OOD · 10 Aug 2020

Assessing the (Un)Trustworthiness of Saliency Maps for Localizing Abnormalities in Medical Imaging
N. Arun, N. Gaw, P. Singh, Ken Chang, M. Aggarwal, ..., J. Patel, M. Gidwani, Julius Adebayo, M. D. Li, Jayashree Kalpathy-Cramer · FAtt · 06 Aug 2020

Axiom-based Grad-CAM: Towards Accurate Visualization and Explanation of CNNs
Ruigang Fu, Qingyong Hu, Xiaohu Dong, Yulan Guo, Yinghui Gao, Biao Li · FAtt · 05 Aug 2020

Weakly-Supervised Cell Tracking via Backward-and-Forward Propagation
Kazuya Nishimura, Junya Hayashida, Chenyang Wang, Dai Fei Elmer Ker, Ryoma Bise · 30 Jul 2020
Weakly Supervised Minirhizotron Image Segmentation with MIL-CAM
Guohao Yu, A. Zare, Weihuang Xu, R. Matamala, J. Reyes-Cabrera, F. Fritschi, T. Juenger · 30 Jul 2020

Reliable Tuberculosis Detection using Chest X-ray with Deep Learning, Segmentation and Visualization
Tawsifur Rahman, Amith Khandakar, M. A. Kadir, K. R. Islam, Khandaker F. Islam, ..., Tahir Hamid, M. Islam, Z. Mahbub, M. Ayari, M. Chowdhury · 29 Jul 2020

Feature visualization of Raman spectrum analysis with deep convolutional neural network
Masashi Fukuhara, Kazuhiko Fujiwara, Y. Maruyama, H. Itoh · 27 Jul 2020

Are Visual Explanations Useful? A Case Study in Model-in-the-Loop Prediction
Eric Chu, D. Roy, Jacob Andreas · FAtt, LRM · 23 Jul 2020

Deep Active Learning by Model Interpretability
Qiang Liu, Zhaocheng Liu, Xiaofang Zhu, Yeliang Xiu · 23 Jul 2020
Pattern-Guided Integrated Gradients
Robert Schwarzenberg, Steffen Castle · 21 Jul 2020

A Generic Visualization Approach for Convolutional Neural Networks
Ahmed Taha, Xitong Yang, Abhinav Shrivastava, L. Davis · 19 Jul 2020

Multi-Stage Influence Function
Hongge Chen, Si Si, Yongqian Li, Ciprian Chelba, Sanjiv Kumar, Duane S. Boning, Cho-Jui Hsieh · TDI · 17 Jul 2020

Sequential Explanations with Mental Model-Based Policies
A. Yeung, Shalmali Joshi, Joseph Jay Williams, Frank Rudzicz · FAtt, LRM · 17 Jul 2020

Deep Learning in Protein Structural Modeling and Design
Wenhao Gao, S. Mahajan, Jeremias Sulam, Jeffrey J. Gray · 16 Jul 2020
Concept Learners for Few-Shot Learning
Kaidi Cao, Maria Brbic, J. Leskovec · VLM, OffRL · 14 Jul 2020

A simple defense against adversarial attacks on heatmap explanations
Laura Rieger, Lars Kai Hansen · FAtt, AAML · 13 Jul 2020

Usefulness of interpretability methods to explain deep learning based plant stress phenotyping
Koushik Nagasubramanian, Asheesh K. Singh, Arti Singh, S. Sarkar, Baskar Ganapathysubramanian · FAtt · 11 Jul 2020

Scientific Discovery by Generating Counterfactuals using Image Translation
Arunachalam Narayanaswamy, Subhashini Venugopalan, D. Webster, L. Peng, G. Corrado, ..., Abigail E. Huang, Siva Balasubramanian, Michael P. Brenner, Phil Q. Nelson, A. Varadarajan · DiffM, MedIm · 10 Jul 2020

PointMask: Towards Interpretable and Bias-Resilient Point Cloud Processing
Saeid Asgari Taghanaki, Kaveh Hassani, P. Jayaraman, Amir Hosein Khas Ahmadi, Tonya Custis · 3DPC · 09 Jul 2020
Evaluation for Weakly Supervised Object Localization: Protocol, Metrics, and Datasets
Junsuk Choe, Seong Joon Oh, Sanghyuk Chun, Seungho Lee, Zeynep Akata, Hyunjung Shim · WSOL · 08 Jul 2020

Drug discovery with explainable artificial intelligence
José Jiménez-Luna, F. Grisoni, G. Schneider · 01 Jul 2020

Scaling Symbolic Methods using Gradients for Neural Model Explanation
Subham S. Sahoo, Subhashini Venugopalan, Li Li, Rishabh Singh, Patrick F. Riley · FAtt · 29 Jun 2020

Invertible Concept-based Explanations for CNN Models with Non-negative Concept Activation Vectors
Ruihan Zhang, Prashan Madumal, Tim Miller, Krista A. Ehinger, Benjamin I. P. Rubinstein · FAtt · 27 Jun 2020

BERTology Meets Biology: Interpreting Attention in Protein Language Models
Jesse Vig, Ali Madani, L. Varshney, Caiming Xiong, R. Socher, Nazneen Rajani · 26 Jun 2020
Proper Network Interpretability Helps Adversarial Robustness in Classification
Akhilan Boopathy, Sijia Liu, Gaoyuan Zhang, Cynthia Liu, Pin-Yu Chen, Shiyu Chang, Luca Daniel · AAML, FAtt · 26 Jun 2020

SS-CAM: Smoothed Score-CAM for Sharper Visual Feature Localization
Haofan Wang, Rakshit Naidu, J. Michael, Soumya Snigdha Kundu · FAtt · 25 Jun 2020

Model Explanations with Differential Privacy
Neel Patel, Reza Shokri, Yair Zick · SILM, FedML · 16 Jun 2020

Rethinking the Role of Gradient-Based Attribution Methods for Model Interpretability
Suraj Srinivas, F. Fleuret · FAtt · 16 Jun 2020

On Saliency Maps and Adversarial Robustness
Puneet Mangla, Vedant Singh, V. Balasubramanian · AAML · 14 Jun 2020