SmoothGrad: removing noise by adding noise (arXiv:1706.03825)

12 June 2017
D. Smilkov, Nikhil Thorat, Been Kim, F. Viégas, Martin Wattenberg
FAtt · ODL
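For readers unfamiliar with the cited method: SmoothGrad sharpens gradient-based sensitivity maps by averaging them over Gaussian-perturbed copies of the input, computing M̂_c(x) = (1/n) Σᵢ M_c(x + gᵢ) with gᵢ ~ N(0, σ²). The sketch below is a minimal illustration of that averaging loop, assuming a differentiable PyTorch image classifier; the function name `smooth_grad` and its defaults are illustrative, not the authors' reference code.

```python
import torch

def smooth_grad(model, x, target_class, n_samples=50, sigma=0.15):
    """Minimal SmoothGrad loop (Smilkov et al., 2017): average the input
    gradient of the target logit over Gaussian-perturbed copies of the
    input. `model` maps (1, C, H, W) -> (1, num_classes) logits."""
    x = x.detach()
    # The paper parameterizes the noise level as sigma / (x_max - x_min),
    # so scale the standard deviation by the input's value range.
    std = sigma * (x.max() - x.min()).item()
    grad_sum = torch.zeros_like(x)
    for _ in range(n_samples):
        noisy = (x + std * torch.randn_like(x)).requires_grad_(True)
        model(noisy)[0, target_class].backward()
        grad_sum += noisy.grad
    return grad_sum / n_samples  # smoothed sensitivity map, same shape as x
```

The paper reports that around 50 samples and noise at roughly 10-20% of the input range give visibly cleaner maps, with diminishing returns beyond that; as with plain gradients, the absolute value of the result is typically what gets visualized.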
Papers citing "SmoothGrad: removing noise by adding noise"

50 / 1,161 papers shown
Title
  Authors | Topics | Metrics | Date
NeuroMask: Explaining Predictions of Deep Neural Networks through Mask Learning
  M. Alzantot, Amy Widdicombe, S. Julier, Mani B. Srivastava | AAML, FAtt | 20 · 3 · 0 | 05 Aug 2019

Semi-supervised Thai Sentence Segmentation Using Local and Distant Word Representations
  Chanatip Saetia, E. Chuangsuwanich, Tawunrat Chalothorn, P. Vateekul | 15 · 5 · 0 | 04 Aug 2019

Smooth Grad-CAM++: An Enhanced Inference Level Visualization Technique for Deep Convolutional Neural Network Models
  Daniel Omeiza, Skyler Speakman, C. Cintas, Komminist Weldemariam | FAtt | 22 · 216 · 0 | 03 Aug 2019

Grid Saliency for Context Explanations of Semantic Segmentation
  Lukas Hoyer, Mauricio Muñoz, P. Katiyar, Anna Khoreva, Volker Fischer | FAtt | 25 · 48 · 0 | 30 Jul 2019

explAIner: A Visual Analytics Framework for Interactive and Explainable Machine Learning
  Thilo Spinner, U. Schlegel, H. Schäfer, Mennatallah El-Assady | HAI | 15 · 234 · 0 | 29 Jul 2019

How to Manipulate CNNs to Make Them Lie: the GradCAM Case
  T. Viering, Ziqi Wang, Marco Loog, E. Eisemann | AAML, FAtt | 17 · 28 · 0 | 25 Jul 2019

Benchmarking Attribution Methods with Relative Feature Importance
  Mengjiao Yang, Been Kim | FAtt, XAI | 21 · 140 · 0 | 23 Jul 2019

Information-Bottleneck Approach to Salient Region Discovery
  A. Zhmoginov, Ian S. Fischer, Mark Sandler | 12 · 18 · 0 | 22 Jul 2019

A Survey on Explainable Artificial Intelligence (XAI): Towards Medical XAI
  Erico Tjoa, Cuntai Guan | XAI | 56 · 1,413 · 0 | 17 Jul 2019

Explaining Classifiers with Causal Concept Effect (CaCE)
  Yash Goyal, Amir Feder, Uri Shalit, Been Kim | CML | 19 · 172 · 0 | 16 Jul 2019

Explaining an increase in predicted risk for clinical alerts
  Michaela Hardt, A. Rajkomar, Gerardo Flores, Andrew M. Dai, M. Howell, Greg S. Corrado, Claire Cui, Moritz Hardt | FAtt | 17 · 12 · 0 | 10 Jul 2019

The What-If Tool: Interactive Probing of Machine Learning Models
  James Wexler, Mahima Pushkarna, Tolga Bolukbasi, Martin Wattenberg, F. Viégas, Jimbo Wilson | VLM | 57 · 484 · 0 | 09 Jul 2019

ELF: Embedded Localisation of Features in pre-trained CNN
  Assia Benbihi, M. Geist, Cédric Pradalier | 19 · 30 · 0 | 07 Jul 2019

Towards Robust, Locally Linear Deep Networks
  Guang-He Lee, David Alvarez-Melis, Tommi Jaakkola | ODL | 19 · 48 · 0 | 07 Jul 2019

On the Privacy Risks of Model Explanations
  Reza Shokri, Martin Strobel, Yair Zick | MIACV, PILM, SILM, FAtt | 6 · 36 · 0 | 29 Jun 2019

Improving performance of deep learning models with axiomatic attribution priors and expected gradients
  G. Erion, Joseph D. Janizek, Pascal Sturmfels, Scott M. Lundberg, Su-In Lee | OOD, BDL, FAtt | 21 · 80 · 0 | 25 Jun 2019

Saliency-driven Word Alignment Interpretation for Neural Machine Translation
  Shuoyang Ding, Hainan Xu, Philipp Koehn | 20 · 55 · 0 | 25 Jun 2019

Incorporating Priors with Feature Attribution on Text Classification
  Frederick Liu, Besim Avci | FAtt, FaML | 31 · 120 · 0 | 19 Jun 2019

Explanations can be manipulated and geometry is to blame
  Ann-Kathrin Dombrowski, Maximilian Alber, Christopher J. Anders, M. Ackermann, K. Müller, Pan Kessel | AAML, FAtt | 22 · 329 · 0 | 19 Jun 2019

Exact and Consistent Interpretation of Piecewise Linear Models Hidden behind APIs: A Closed Form Solution
  Zicun Cong, Lingyang Chu, Lanjun Wang, X. Hu, J. Pei | 197 · 5 · 0 | 17 Jun 2019

Issues with post-hoc counterfactual explanations: a discussion
  Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki | CML | 107 · 44 · 0 | 11 Jun 2019

XRAI: Better Attributions Through Regions
  A. Kapishnikov, Tolga Bolukbasi, Fernanda Viégas, Michael Terry | FAtt, XAI | 20 · 212 · 0 | 06 Jun 2019

c-Eval: A Unified Metric to Evaluate Feature-based Explanations via Perturbation
  Minh Nhat Vu, Truc D. T. Nguyen, Nhathai Phan, Ralucca Gera, My T. Thai | AAML, FAtt | 20 · 22 · 0 | 05 Jun 2019

Interpretable and Differentially Private Predictions
  Frederik Harder, Matthias Bauer, Mijung Park | FAtt | 6 · 52 · 0 | 05 Jun 2019

Adversarial Robustness as a Prior for Learned Representations
  Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Brandon Tran, A. Madry | OOD, AAML | 27 · 63 · 0 | 03 Jun 2019

Explainability Techniques for Graph Convolutional Networks
  Federico Baldassarre, Hossein Azizpour | GNN, FAtt | 22 · 264 · 0 | 31 May 2019

Learning Representations by Humans, for Humans
  Sophie Hilgard, Nir Rosenfeld, M. Banaji, Jack Cao, David C. Parkes | OCL, HAI, AI4CE | 34 · 29 · 0 | 29 May 2019

Certifiably Robust Interpretation in Deep Learning
  Alexander Levine, Sahil Singla, S. Feizi | FAtt, AAML | 26 · 63 · 0 | 28 May 2019

A Rate-Distortion Framework for Explaining Neural Network Decisions
  Jan Macdonald, S. Wäldchen, Sascha Hauch, Gitta Kutyniok | 13 · 40 · 0 | 27 May 2019

Interpreting Adversarially Trained Convolutional Neural Networks
  Tianyuan Zhang, Zhanxing Zhu | AAML, GAN, FAtt | 28 · 157 · 0 | 23 May 2019

What Do Adversarially Robust Models Look At?
  Takahiro Itazuri, Yoshihiro Fukuhara, Hirokatsu Kataoka, Shigeo Morishima | 19 · 5 · 0 | 19 May 2019

On the Connection Between Adversarial Robustness and Saliency Map Interpretability
  Christian Etmann, Sebastian Lunz, Peter Maass, Carola-Bibiane Schönlieb | AAML, FAtt | 15 · 156 · 0 | 10 May 2019

Embedding Human Knowledge into Deep Neural Network via Attention Map
  Masahiro Mitsuhara, Hiroshi Fukui, Yusuke Sakashita, Takanori Ogata, Tsubasa Hirakawa, Takayoshi Yamashita, H. Fujiyoshi | 16 · 72 · 0 | 09 May 2019

Adversarial Examples Are Not Bugs, They Are Features
  Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, A. Madry | SILM | 28 · 1,807 · 0 | 06 May 2019

Temporal Graph Convolutional Networks for Automatic Seizure Detection
  Ian Covert, B. Krishnan, I. Najm, Jiening Zhan, Matthew Shore, J. Hixson, M. Po | 8 · 67 · 0 | 03 May 2019

Full-Gradient Representation for Neural Network Visualization
  Suraj Srinivas, F. Fleuret | MILM, FAtt | 21 · 268 · 0 | 02 May 2019

Dropping Pixels for Adversarial Robustness
  Hossein Hosseini, Sreeram Kannan, Radha Poovendran | 14 · 16 · 0 | 01 May 2019

Beyond Explainability: Leveraging Interpretability for Improved Adversarial Learning
  Devinder Kumar, Ibrahim Ben Daya, Kanav Vats, Jeffery Feng, Graham W. Taylor, Alexander Wong | AAML | 14 · 1 · 0 | 21 Apr 2019

Software and application patterns for explanation methods
  Maximilian Alber | 33 · 11 · 0 | 09 Apr 2019

Visualization of Convolutional Neural Networks for Monocular Depth Estimation
  Junjie Hu, Yan Zhang, Takayuki Okatani | MDE | 33 · 83 · 0 | 06 Apr 2019

Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
  Fred Hohman, Haekyu Park, Caleb Robinson, Duen Horng Chau | FAtt, 3DH, HAI | 19 · 213 · 0 | 04 Apr 2019

InfoMask: Masked Variational Latent Representation to Localize Chest Disease
  Saeid Asgari Taghanaki, Mohammad Havaei, T. Berthier, Francis Dutil, Lisa Di-Jorio, Ghassan Hamarneh, Yoshua Bengio | 15 · 42 · 0 | 28 Mar 2019

Bridging Adversarial Robustness and Gradient Interpretability
  Beomsu Kim, Junghoon Seo, Taegyun Jeon | AAML | 16 · 39 · 0 | 27 Mar 2019

Activation Analysis of a Byte-Based Deep Neural Network for Malware Classification
  Scott E. Coull, Christopher Gardner | 16 · 50 · 0 | 12 Mar 2019

Aggregating explanation methods for stable and robust explainability
  Laura Rieger, Lars Kai Hansen | AAML, FAtt | 37 · 11 · 0 | 01 Mar 2019

Explaining a black-box using Deep Variational Information Bottleneck Approach
  Seo-Jin Bang, P. Xie, Heewook Lee, Wei Wu, Eric Xing | XAI, FAtt | 14 · 75 · 0 | 19 Feb 2019

Regularizing Black-box Models for Improved Interpretability
  Gregory Plumb, Maruan Al-Shedivat, Ángel Alexander Cabrera, Adam Perer, Eric Xing, Ameet Talwalkar | AAML | 24 · 79 · 0 | 18 Feb 2019

Why are Saliency Maps Noisy? Cause of and Solution to Noisy Saliency Maps
  Beomsu Kim, Junghoon Seo, Seunghyun Jeon, Jamyoung Koo, J. Choe, Taegyun Jeon | FAtt | 32 · 69 · 0 | 13 Feb 2019

Certified Adversarial Robustness via Randomized Smoothing
  Jeremy M. Cohen, Elan Rosenfeld, J. Zico Kolter | AAML | 17 · 1,992 · 0 | 08 Feb 2019

Towards Automatic Concept-based Explanations
  Amirata Ghorbani, James Wexler, James Zou, Been Kim | FAtt, LRM | 38 · 19 · 0 | 07 Feb 2019