SmoothGrad: removing noise by adding noise

12 June 2017
D. Smilkov, Nikhil Thorat, Been Kim, F. Viégas, Martin Wattenberg
Topics: FAtt, ODL
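
For context, since the listing itself does not restate the method: SmoothGrad averages input gradients over several noise-perturbed copies of the input to produce a less noisy sensitivity map. Below is a minimal sketch of that estimator, assuming a differentiable PyTorch classifier; `smoothgrad_map`, `noise_frac`, and `n_samples` are illustrative names, not from the paper or any library.

```python
# Minimal SmoothGrad sketch (illustrative, not the authors' reference code).
# Assumes `model` is a differentiable PyTorch classifier over batched inputs.
import torch

def smoothgrad_map(model, x, target_class, n_samples=50, noise_frac=0.15):
    """Average the class-score gradient over noisy copies of input x.

    noise_frac sets the Gaussian noise std as a fraction of the input's
    value range; the paper recommends roughly 10-20% noise and ~50 samples.
    """
    sigma = noise_frac * (x.max() - x.min())
    grad_sum = torch.zeros_like(x)
    for _ in range(n_samples):
        # Perturb the input, then track gradients w.r.t. the noisy copy.
        noisy = (x + sigma * torch.randn_like(x)).requires_grad_(True)
        score = model(noisy.unsqueeze(0))[0, target_class]
        grad_sum += torch.autograd.grad(score, noisy)[0]
    return grad_sum / n_samples  # the smoothed sensitivity map
```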

Papers citing "SmoothGrad: removing noise by adding noise"

Showing 50 of 1,161 citing papers.

ILCRO: Making Importance Landscapes Flat Again
Vincent Moens, Simiao Yu, G. Salimi-Khorshidi
27 Jan 2020

Learning Preference-Based Similarities from Face Images using Siamese Multi-Task CNNs
N. Gessert, Alexander Schlaefer
Topics: CVBM
25 Jan 2020

SAUNet: Shape Attentive U-Net for Interpretable Medical Image Segmentation
Jesse Sun, Fatemeh Darbeha, M. Zaidi, Bo Wang
Topics: AAML
21 Jan 2020

An Adversarial Approach for the Robust Classification of Pneumonia from Chest Radiographs
Joseph D. Janizek, G. Erion, A. DeGrave, Su-In Lee
Topics: OOD, MedIm
13 Jan 2020

On Interpretability of Artificial Neural Networks: A Survey
Fenglei Fan, Jinjun Xiong, Mengzhou Li, Ge Wang
Topics: AAML, AI4CE
08 Jan 2020

Restricting the Flow: Information Bottlenecks for Attribution
Karl Schulz, Leon Sixt, Federico Tombari, Tim Landgraf
Topics: FAtt
02 Jan 2020

When Explanations Lie: Why Many Modified BP Attributions Fail
Leon Sixt, Maximilian Granz, Tim Landgraf
Topics: BDL, FAtt, XAI
20 Dec 2019

Explaining Classifiers using Adversarial Perturbations on the Perceptual Ball
Andrew Elliott, Stephen Law, Chris Russell
Topics: AAML
19 Dec 2019

Exploratory Not Explanatory: Counterfactual Analysis of Saliency Maps for Deep Reinforcement Learning
Akanksha Atrey, Kaleigh Clary, David D. Jensen
Topics: FAtt, LRM
09 Dec 2019

An Empirical Study on the Relation between Network Interpretability and Adversarial Robustness
Adam Noack, Isaac Ahern, Dejing Dou, Boyang Albert Li
Topics: OOD, AAML
07 Dec 2019

A Step Towards Exposing Bias in Trained Deep Convolutional Neural Network Models
Daniel Omeiza
Topics: FAtt
03 Dec 2019

Automated Dependence Plots
David I. Inouye, Liu Leqi, Joon Sik Kim, Bryon Aragam, Pradeep Ravikumar
02 Dec 2019

A Programmatic and Semantic Approach to Explaining and Debugging Neural Network Based Object Detectors
Edward J. Kim, D. Gopinath, C. Păsăreanu, S. Seshia
01 Dec 2019

Attributional Robustness Training using Input-Gradient Spatial Alignment
M. Singh, Nupur Kumari, Puneet Mangla, Abhishek Sinha, V. Balasubramanian, Balaji Krishnamurthy
Topics: OOD
29 Nov 2019

A Case for the Score: Identifying Image Anomalies using Variational Autoencoder Gradients
David Zimmerer, Jens Petersen, Simon A. A. Kohl, Klaus H. Maier-Hein
Topics: DRL
28 Nov 2019

Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey
Vanessa Buhrmester, David Münch, Michael Arens
Topics: MLAU, FaML, XAI, AAML
27 Nov 2019

Hearing Lips: Improving Lip Reading by Distilling Speech Recognizers
Ya Zhao, Rui Xu, Xinchao Wang, Peng Hou, Haihong Tang, Xiuming Zhang
26 Nov 2019

Efficient Saliency Maps for Explainable AI
T. Nathan Mundhenk, Barry Y. Chen, Gerald Friedland
Topics: XAI, FAtt
26 Nov 2019

Analysis of Deep Networks for Monocular Depth Estimation Through Adversarial Attacks with Proposal of a Defense Method
Junjie Hu, Takayuki Okatani
Topics: AAML, MDE
20 Nov 2019

Signed Input Regularization
Saeid Asgari Taghanaki, Kumar Abhishek, Ghassan Hamarneh
Topics: AAML
16 Nov 2019

Streaming convolutional neural networks for end-to-end learning with multi-megapixel images
H. Pinckaers, Bram van Ginneken, G. Litjens
Topics: MedIm
11 Nov 2019

ERASER: A Benchmark to Evaluate Rationalized NLP Models
Jay DeYoung, Sarthak Jain, Nazneen Rajani, Eric P. Lehman, Caiming Xiong, R. Socher, Byron C. Wallace
08 Nov 2019

XDeep: An Interpretation Tool for Deep Neural Networks
Fan Yang, Zijian Zhang, Haofan Wang, Yuening Li, Xia Hu
Topics: XAI, HAI
04 Nov 2019

Leveraging Pretrained Image Classifiers for Language-Based Segmentation
David Golub, Ahmed El-Kishky, Roberto Martín-Martín
Topics: VLM
03 Nov 2019

Concept Saliency Maps to Visualize Relevant Features in Deep Generative Models
L. Brocki, N. C. Chung
Topics: FAtt
29 Oct 2019

Input-Cell Attention Reduces Vanishing Saliency of Recurrent Neural Networks
Aya Abdelsalam Ismail, Mohamed K. Gunady, L. Pessoa, H. C. Bravo, S. Feizi
Topics: AI4TS
27 Oct 2019

CXPlain: Causal Explanations for Model Interpretation under Uncertainty
Patrick Schwab, W. Karlen
Topics: FAtt, CML
27 Oct 2019

Seeing What a GAN Cannot Generate
David Bau, Jun-Yan Zhu, Jonas Wulff, William S. Peebles, Hendrik Strobelt, Bolei Zhou, Antonio Torralba
Topics: GAN
24 Oct 2019

Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
Topics: XAI
22 Oct 2019

Towards Best Practice in Explaining Neural Network Decisions with LRP
M. Kohlbrenner, Alexander Bauer, Shinichi Nakajima, Alexander Binder, Wojciech Samek, Sebastian Lapuschkin
22 Oct 2019

Contextual Prediction Difference Analysis for Explaining Individual Image Classifications
Jindong Gu, Volker Tresp
Topics: FAtt
21 Oct 2019

Semantics for Global and Local Interpretation of Deep Neural Networks
Jindong Gu, Volker Tresp
Topics: AI4CE
21 Oct 2019

Understanding Deep Networks via Extremal Perturbations and Smooth Masks
Ruth C. Fong, Mandela Patrick, Andrea Vedaldi
Topics: AAML
18 Oct 2019

On Completeness-aware Concept-Based Explanations in Deep Neural Networks
Chih-Kuan Yeh, Been Kim, Sercan Ö. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar
Topics: FAtt
17 Oct 2019

Do Explanations Reflect Decisions? A Machine-centric Strategy to Quantify the Performance of Explainability Algorithms
Z. Q. Lin, M. Shafiee, S. Bochkarev, Michael St. Jules, Xiao Yu Wang, A. Wong
Topics: FAtt
16 Oct 2019

Explaining image classifiers by removing input features using generative models
Chirag Agarwal, Anh Totti Nguyen
Topics: FAtt
09 Oct 2019

Interpretable Disentanglement of Neural Networks by Extracting Class-Specific Subnetwork
Yulong Wang, Xiaolin Hu, Hang Su
Topics: FAtt
07 Oct 2019

Testing and verification of neural-network-based safety-critical control software: A systematic literature review
Jin Zhang, Jingyue Li
05 Oct 2019

Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks
Mehdi Neshat, Zifan Wang, Bradley Alexander, Fan Yang, Zijian Zhang, Sirui Ding, Markus Wagner, Xia Hu
Topics: FAtt
03 Oct 2019

Oblique Decision Trees from Derivatives of ReLU Networks
Guang-He Lee, Tommi Jaakkola
30 Sep 2019

Towards Explainable Artificial Intelligence
Wojciech Samek, K. Müller
Topics: XAI
26 Sep 2019

Robust Local Features for Improving the Generalization of Adversarial Training
Chuanbiao Song, Kun He, Jiadong Lin, Liwei Wang, J. Hopcroft
Topics: OOD, AAML
23 Sep 2019

AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models
Eric Wallace, Jens Tuyls, Junlin Wang, Sanjay Subramanian, Matt Gardner, Sameer Singh
Topics: MILM
19 Sep 2019

Identifying Pediatric Vascular Anomalies With Deep Learning
Justin Chan, Sharat Raju, Randall Bly, J. Perkins, Shyamnath Gollakota
16 Sep 2019

X-ToM: Explaining with Theory-of-Mind for Gaining Justified Human Trust
Arjun Reddy Akula, Changsong Liu, Sari Saba-Sadiya, Hongjing Lu, S. Todorovic, J. Chai, Song-Chun Zhu
15 Sep 2019

NormLime: A New Feature Importance Metric for Explaining Deep Neural Networks
Isaac Ahern, Adam Noack, Luis Guzman-Nateras, Dejing Dou, Boyang Albert Li, Jun Huan
Topics: FAtt
10 Sep 2019

Understanding Bias in Machine Learning
Jindong Gu, Daniela Oelke
Topics: AI4CE, FaML
02 Sep 2019

Saliency Methods for Explaining Adversarial Attacks
Jindong Gu, Volker Tresp
Topics: FAtt, AAML
22 Aug 2019

Interpretable and Fine-Grained Visual Explanations for Convolutional Neural Networks
Jörg Wagner, Jan M. Köhler, Tobias Gindele, Leon Hetzel, Thaddäus Wiedemer, Sven Behnke
Topics: AAML, FAtt
07 Aug 2019

Free-Lunch Saliency via Attention in Atari Agents
Dmitry Nikulin, A. Ianina, Vladimir Aliev, Sergey I. Nikolenko
Topics: FAtt
07 Aug 2019