ResearchTrend.AI

SmoothGrad: removing noise by adding noise (arXiv:1706.03825)

12 June 2017 · D. Smilkov, Nikhil Thorat, Been Kim, F. Viégas, Martin Wattenberg · FAtt, ODL

Papers citing "SmoothGrad: removing noise by adding noise"

50 / 1,161 papers shown
Neural Network Attributions: A Causal Perspective
Aditya Chattopadhyay, Piyushi Manupriya, Anirban Sarkar, V. Balasubramanian · CML · 06 Feb 2019

Fooling Neural Network Interpretations via Adversarial Model Manipulation
Juyeon Heo, Sunghwan Joo, Taesup Moon · AAML, FAtt · 06 Feb 2019

Understanding Impacts of High-Order Loss Approximations and Features in Deep Learning Interpretation
Sahil Singla, Eric Wallace, Shi Feng, S. Feizi · FAtt · 01 Feb 2019

An Evaluation of the Human-Interpretability of Explanation
Isaac Lage, Emily Chen, Jeffrey He, Menaka Narayanan, Been Kim, Sam Gershman, Finale Doshi-Velez · FAtt, XAI · 31 Jan 2019

Interpreting Deep Neural Networks Through Variable Importance
J. Ish-Horowicz, Dana Udwin, Seth Flaxman, Sarah Filippi, Lorin Crawford · FAtt · 28 Jan 2019

On the (In)fidelity and Sensitivity for Explanations
Chih-Kuan Yeh, Cheng-Yu Hsieh, A. Suggala, David I. Inouye, Pradeep Ravikumar · FAtt · 27 Jan 2019

Toward Explainable Fashion Recommendation
Pongsate Tangseng, Takayuki Okatani · 15 Jan 2019

Interpretable machine learning: definitions, methods, and applications
W. James Murdoch, Chandan Singh, Karl Kumbier, R. Abbasi-Asl, Bin-Xia Yu · XAI, HAI · 14 Jan 2019

Attention Branch Network: Learning of Attention Mechanism for Visual Explanation
Hiroshi Fukui, Tsubasa Hirakawa, Takayoshi Yamashita, H. Fujiyoshi · XAI, FAtt · 25 Dec 2018

AVRA: Automatic Visual Ratings of Atrophy from MRI images using Recurrent Convolutional Neural Networks
G. Mårtensson, D. Ferreira, L. Cavallin, J.-Sebastian Muehlboeck, L. Wahlund, Chunliang Wang, E. Westman · 23 Dec 2018

A Survey of Safety and Trustworthiness of Deep Neural Networks: Verification, Testing, Adversarial Attack and Defence, and Interpretability
Xiaowei Huang, Daniel Kroening, Wenjie Ruan, Marta Kwiatkowska, Youcheng Sun, Emese Thamo, Min Wu, Xinping Yi · AAML · 18 Dec 2018

Context-encoding Variational Autoencoder for Unsupervised Anomaly Detection
David Zimmerer, Simon A. A. Kohl, Jens Petersen, Fabian Isensee, Klaus H. Maier-Hein · DRL · 14 Dec 2018

Can I trust you more? Model-Agnostic Hierarchical Explanations
Michael Tsang, Youbang Sun, Dongxu Ren, Yan Liu · FAtt · 12 Dec 2018

Diagnostic Visualization for Deep Neural Networks Using Stochastic Gradient Langevin Dynamics
Biye Jiang, David M. Chan, Tianhao Zhang, John F. Canny · FAtt · 11 Dec 2018

Interpretable Deep Learning under Fire
Xinyang Zhang, Ningfei Wang, Hua Shen, S. Ji, Xiapu Luo, Ting Wang · AAML, AI4CE · 03 Dec 2018

Discovering Molecular Functional Groups Using Graph Convolutional Neural Networks
Phillip E. Pope, Soheil Kolouri, Mohammad Rostami, Charles E. Martin, Heiko Hoffmann · GNN · 01 Dec 2018

Analyzing Federated Learning through an Adversarial Lens
A. Bhagoji, Supriyo Chakraborty, Prateek Mittal, S. Calo · FedML · 29 Nov 2018

Representer Point Selection for Explaining Deep Neural Networks
Chih-Kuan Yeh, Joon Sik Kim, Ian En-Hsu Yen, Pradeep Ravikumar · TDI · 23 Nov 2018

On a Sparse Shortcut Topology of Artificial Neural Networks
Fenglei Fan, Dayang Wang, Hengtao Guo, Qikui Zhu, Pingkun Yan, Ge Wang, Hengyong Yu · 22 Nov 2018

Bioresorbable Scaffold Visualization in IVOCT Images Using CNNs and Weakly Supervised Localization
N. Gessert, S. Latus, Youssef S. Abdelwahed, D. Leistner, Matthias Lutz, Alexander Schlaefer · 22 Oct 2018

Local Explanation Methods for Deep Neural Networks Lack Sensitivity to Parameter Values
Julius Adebayo, Justin Gilmer, Ian Goodfellow, Been Kim · FAtt, AAML · 08 Oct 2018

Sanity Checks for Saliency Maps
Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim · FAtt, AAML, XAI · 08 Oct 2018

Interpreting Layered Neural Networks via Hierarchical Modular Representation
C. Watanabe · 03 Oct 2018

Training Machine Learning Models by Regularizing their Explanations
A. Ross · FaML · 29 Sep 2018

Rethinking Self-driving: Multi-task Knowledge for Better Generalization and Accident Explanation Ability
Zhihao Li, Toshiyuki Motoyoshi, Kazuma Sasaki, T. Ogata, S. Sugano · LRM · 28 Sep 2018

Interpreting Neural Networks With Nearest Neighbors
Eric Wallace, Shi Feng, Jordan L. Boyd-Graber · AAML, FAtt, MILM · 08 Sep 2018

iNNvestigate neural networks!
Maximilian Alber, Sebastian Lapuschkin, P. Seegerer, Miriam Hagele, Kristof T. Schütt, G. Montavon, Wojciech Samek, K. Müller, Sven Dähne, Pieter-Jan Kindermans · 13 Aug 2018

Techniques for Interpretable Machine Learning
Mengnan Du, Ninghao Liu, Xia Hu · FaML · 31 Jul 2018

Model Agnostic Saliency for Weakly Supervised Lesion Detection from Breast DCE-MRI
Gabriel Maicas, G. Snaauw, A. Bradley, Ian Reid, G. Carneiro · MedIm · 20 Jul 2018

Model Reconstruction from Model Explanations
S. Milli, Ludwig Schmidt, Anca Dragan, Moritz Hardt · FAtt · 13 Jul 2018

Direct Uncertainty Prediction for Medical Second Opinions
M. Raghu, Katy Blumer, Rory Sayres, Ziad Obermeyer, Robert D. Kleinberg, S. Mullainathan, Jon M. Kleinberg · OOD, UD · 04 Jul 2018

BayesGrad: Explaining Predictions of Graph Convolutional Networks
Hirotaka Akita, Kosuke Nakago, Tomoki Komatsu, Yohei Sugawara, S. Maeda, Yukino Baba, H. Kashima · FAtt, OOD, BDL · 04 Jul 2018

A Benchmark for Interpretability Methods in Deep Neural Networks
Sara Hooker, D. Erhan, Pieter-Jan Kindermans, Been Kim · FAtt, UQCV · 28 Jun 2018

This Looks Like That: Deep Learning for Interpretable Image Recognition
Chaofan Chen, Oscar Li, Chaofan Tao, A. Barnett, Jonathan Su, Cynthia Rudin · 27 Jun 2018

xGEMs: Generating Examplars to Explain Black-Box Models
Shalmali Joshi, Oluwasanmi Koyejo, Been Kim, Joydeep Ghosh · MLAU · 22 Jun 2018

Maximally Invariant Data Perturbation as Explanation
Satoshi Hara, Kouichi Ikeno, Tasuku Soma, Takanori Maehara · AAML · 19 Jun 2018

Hierarchical interpretations for neural network predictions
Chandan Singh, W. James Murdoch, Bin Yu · 14 Jun 2018

Producing radiologist-quality reports for interpretable artificial intelligence
William Gale, Luke Oakden-Rayner, G. Carneiro, A. Bradley, L. Palmer · MedIm · 01 Jun 2018

Explaining Explanations: An Overview of Interpretability of Machine Learning
Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael A. Specter, Lalana Kagal · XAI · 31 May 2018

Semantic Network Interpretation
Pei Guo, Ryan Farrell · MILM, FAtt · 23 May 2018

Learning what and where to attend
Drew Linsley, Dan Scheibler, S. Eberhardt, Thomas Serre · 22 May 2018

Generalizing multistain immunohistochemistry tissue segmentation using one-shot color deconvolution deep neural networks
Amal Lahiani, J. Gildenblat, I. Klaman, Nassir Navab, Eldad Klaiman · 17 May 2018

Pathologies of Neural Models Make Interpretations Difficult
Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, Jordan L. Boyd-Graber · AAML, FAtt · 20 Apr 2018

Single Day Outdoor Photometric Stereo
Yannick Hold-Geoffroy, Paulo F. U. Gotardo, Jean-François Lalonde · 28 Mar 2018

Classification of crystallization outcomes using deep convolutional neural networks
Andrew E. Bruno, P. Charbonneau, J. Newman, E. Snell, David R. So, Vincent Vanhoucke, Christopher J. Watkins, Shawn Williams, Julie Wilson · 27 Mar 2018

Towards Explanation of DNN-based Prediction with Guided Feature Inversion
Mengnan Du, Ninghao Liu, Qingquan Song, Xia Hu · FAtt · 19 Mar 2018

Interpreting Deep Classifier by Visual Distillation of Dark Knowledge
Kai Xu, Dae Hoon Park, Chang Yi, Charles Sutton · HAI, FAtt · 11 Mar 2018

Understanding and Enhancing the Transferability of Adversarial Examples
Lei Wu, Zhanxing Zhu, Cheng Tai, E. Weinan · AAML, SILM · 27 Feb 2018

Exact and Consistent Interpretation for Piecewise Linear Neural Networks: A Closed Form Solution
Lingyang Chu, X. Hu, Juhua Hu, Lanjun Wang, J. Pei · 17 Feb 2018

Granger-causal Attentive Mixtures of Experts: Learning Important Features with Neural Networks
Patrick Schwab, Djordje Miladinovic, W. Karlen · CML · 06 Feb 2018