SmoothGrad: removing noise by adding noise

12 June 2017
D. Smilkov, Nikhil Thorat, Been Kim, F. Viégas, Martin Wattenberg
FAtt, ODL

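For context on the cited method: SmoothGrad sharpens a gradient-based saliency map M(x) by averaging it over n noisy copies of the input, M_hat(x) = (1/n) * sum_i M(x + eps_i) with eps_i ~ N(0, sigma^2). Below is a minimal PyTorch sketch of that averaging step; the function name, the single-image input convention, and the defaults (n_samples=50, noise_level=0.15 of the input range) are illustrative assumptions, not the paper's reference code.

import torch

def smoothgrad_saliency(model, x, target_class, n_samples=50, noise_level=0.15):
    # SmoothGrad: M_hat(x) = (1/n) * sum_i dS_c(x + eps_i)/dx,
    # with eps_i ~ N(0, sigma^2) and sigma set relative to the input range.
    sigma = noise_level * (x.max() - x.min())
    grad_sum = torch.zeros_like(x)
    for _ in range(n_samples):
        noisy = (x + sigma * torch.randn_like(x)).detach().requires_grad_(True)
        score = model(noisy.unsqueeze(0))[0, target_class]  # class score S_c
        grad_sum += torch.autograd.grad(score, noisy)[0]
    return grad_sum / n_samples  # same shape as x; visualize e.g. its abs()

The paper reports that noise levels around 10-20% of the input's dynamic range tend to give the cleanest maps.
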
Papers citing "SmoothGrad: removing noise by adding noise"

50 / 1,161 papers shown

Best of both worlds: local and global explanations with human-understandable concepts
Jessica Schrouff, Sebastien Baur, Shaobo Hou, Diana Mincu, Eric Loreaux, Ralph Blanes, James Wexler, Alan Karthikesalingam, Been Kim
FAtt · 34 · 28 · 0 · 16 Jun 2021

SEEN: Sharpening Explanations for Graph Neural Networks using Explanations from Neighborhoods
Hyeoncheol Cho, Youngrock Oh, Eunjoo Jeon
FAtt · 19 · 0 · 0 · 16 Jun 2021

Keep CALM and Improve Visual Feature Attribution
Jae Myung Kim, Junsuk Choe, Zeynep Akata, Seong Joon Oh
FAtt · 350 · 20 · 0 · 15 Jun 2021

Taxonomy of Machine Learning Safety: A Survey and Primer
Sina Mohseni, Haotao Wang, Zhiding Yu, Chaowei Xiao, Zhangyang Wang, J. Yadawa
31 · 31 · 0 · 09 Jun 2021

On the Lack of Robust Interpretability of Neural Text Classifiers
Muhammad Bilal Zafar, Michele Donini, Dylan Slack, Cédric Archambeau, Sanjiv Ranjan Das, K. Kenthapadi
AAML · 16 · 21 · 0 · 08 Jun 2021

BR-NPA: A Non-Parametric High-Resolution Attention Model to improve the Interpretability of Attention
T. Gomez, Suiyi Ling, Thomas Fréour, Harold Mouchère
34 · 5 · 0 · 04 Jun 2021

Contrastive ACE: Domain Generalization Through Alignment of Causal Mechanisms
Yunqi Wang, Furui Liu, Zhitang Chen, Qing Lian, Guangyong Chen, Jianye Hao, Yik-Chung Wu
OOD, CML · 33 · 35 · 0 · 02 Jun 2021

The Out-of-Distribution Problem in Explainability and Search Methods for Feature Importance Explanations
Peter Hase, Harry Xie, Joey Tianyi Zhou
OODD, LRM, FAtt · 29 · 91 · 0 · 01 Jun 2021

COV-ECGNET: COVID-19 detection using ECG trace images with deep convolutional neural network
Tawsifur Rahman, A. Akinbi, M. Chowdhury, Tarik A. Rashid, Abdulkadir Şengür, Amith Khandakar, K. R. Islam, A. M. Ismael
24 · 77 · 0 · 01 Jun 2021

Distribution Matching for Rationalization
Yongfeng Huang, Yujun Chen, Yulun Du, Zhilin Yang
OOD · 34 · 16 · 0 · 01 Jun 2021

DISSECT: Disentangled Simultaneous Explanations via Concept Traversals
Asma Ghandeharioun, Been Kim, Chun-Liang Li, Brendan Jou, B. Eoff, Rosalind W. Picard
AAML · 35 · 53 · 0 · 31 May 2021

The effectiveness of feature attribution methods and its correlation with automatic evaluation scores
Giang Nguyen, Daeyoung Kim, Anh Totti Nguyen
FAtt · 21 · 86 · 0 · 31 May 2021

Bounded logit attention: Learning to explain image classifiers
Thomas Baumhauer, D. Slijepcevic, Matthias Zeppelzauer
FAtt · 19 · 2 · 0 · 31 May 2021

Attention Flows are Shapley Value Explanations
Kawin Ethayarajh, Dan Jurafsky
FAtt, TDI · 32 · 34 · 0 · 31 May 2021

EDDA: Explanation-driven Data Augmentation to Improve Explanation Faithfulness
Ruiwen Li, Zhibo Zhang, Jiani Li, C. Trabelsi, Scott Sanner, Jongseong Jang, Yeonjeong Jeong, Dongsub Shim
AAML · 16 · 1 · 0 · 29 May 2021

A General Taylor Framework for Unifying and Revisiting Attribution Methods
Huiqi Deng, Na Zou, Mengnan Du, Weifu Chen, Guo-Can Feng, Xia Hu
TDI, FAtt · 44 · 2 · 0 · 28 May 2021

Balancing Robustness and Sensitivity using Feature Contrastive Learning
Seungyeon Kim, Daniel Glasner, Srikumar Ramalingam, Cho-Jui Hsieh, Kishore Papineni, Sanjiv Kumar
25 · 1 · 0 · 19 May 2021

How to Explain Neural Networks: an Approximation Perspective
Hangcheng Dong, Bingguo Liu, Fengdong Chen, Dong Ye, Guodong Liu
FAtt · 20 · 1 · 0 · 17 May 2021

A Comprehensive Taxonomy for Explainable Artificial Intelligence: A Systematic Survey of Surveys on Methods and Concepts
Gesina Schwalbe, Bettina Finzel
XAI · 34 · 184 · 0 · 15 May 2021

Cause and Effect: Hierarchical Concept-based Explanation of Neural Networks
Mohammad Nokhbeh Zaeem, Majid Komeili
CML · 15 · 9 · 0 · 14 May 2021

Biometrics: Trust, but Verify
Anil K. Jain, Debayan Deb, Joshua J. Engelsma
FaML · 28 · 80 · 0 · 14 May 2021

Sanity Simulations for Saliency Methods
Joon Sik Kim, Gregory Plumb, Ameet Talwalkar
FAtt · 43 · 17 · 0 · 13 May 2021

What's wrong with this video? Comparing Explainers for Deepfake Detection
Samuele Pino, Mark J. Carman, Paolo Bestagini
AAML · 20 · 7 · 0 · 12 May 2021

Leveraging Sparse Linear Layers for Debuggable Deep Networks
Eric Wong, Shibani Santurkar, A. Madry
FAtt · 22 · 88 · 0 · 11 May 2021

Improving Molecular Graph Neural Network Explainability with Orthonormalization and Induced Sparsity
Ryan Henderson, Djork-Arné Clevert, F. Montanari
33 · 26 · 0 · 11 May 2021

Do Concept Bottleneck Models Learn as Intended?
Andrei Margeloiu, Matthew Ashman, Umang Bhatt, Yanzhi Chen, M. Jamnik, Adrian Weller
SLR · 25 · 92 · 0 · 10 May 2021

Partially Interpretable Estimators (PIE): Black-Box-Refined Interpretable Machine Learning
Tong Wang, Jingyi Yang, Yunyi Li, Boxiang Wang
FAtt · 20 · 5 · 0 · 06 May 2021

Where and When: Space-Time Attention for Audio-Visual Explanations
Yanbei Chen, Thomas Hummel, A. Sophia Koepke, Zeynep Akata
14 · 3 · 0 · 04 May 2021

Canonical Saliency Maps: Decoding Deep Face Models
Thrupthi Ann John, Vineeth N. Balasubramanian, C. V. Jawahar
CVBM · 24 · 8 · 0 · 04 May 2021

LFI-CAM: Learning Feature Importance for Better Visual Explanation
Kwang Hee Lee, Chaewon Park, J. Oh, Nojun Kwak
FAtt · 32 · 27 · 0 · 03 May 2021

Explanation-Based Human Debugging of NLP Models: A Survey
Piyawat Lertvittayakumjorn, Francesca Toni
LRM · 42 · 79 · 0 · 30 Apr 2021

Interpretable Semantic Photo Geolocation
Jonas Theiner, Eric Müller-Budack, Ralph Ewerth
26 · 30 · 0 · 30 Apr 2021

Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety
Sebastian Houben, Stephanie Abrecht, Maram Akila, Andreas Bär, Felix Brockherde, ..., Serin Varghese, Michael Weber, Sebastian J. Wirkert, Tim Wirtz, Matthias Woehrle
AAML · 13 · 58 · 0 · 29 Apr 2021

Do Feature Attribution Methods Correctly Attribute Features?
Yilun Zhou, Serena Booth, Marco Tulio Ribeiro, J. Shah
FAtt, XAI · 38 · 132 · 0 · 27 Apr 2021

Instance-wise Causal Feature Selection for Model Interpretation
Pranoy Panda, Sai Srinivas Kancheti, V. Balasubramanian
CML · 50 · 16 · 0 · 26 Apr 2021

Exploiting Explanations for Model Inversion Attacks
Xu Zhao, Wencan Zhang, Xiao Xiao, Brian Y. Lim
MIACV · 34 · 82 · 0 · 26 Apr 2021

Towards Rigorous Interpretations: a Formalisation of Feature Attribution
Darius Afchar, Romain Hennequin, Vincent Guigue
FAtt · 33 · 20 · 0 · 26 Apr 2021

Semiotic Aggregation in Deep Learning
Bogdan Musat, Razvan Andonie
FAtt · 25 · 6 · 0 · 22 Apr 2021

Improving Attribution Methods by Learning Submodular Functions
Piyushi Manupriya, Tarun Ram Menta, S. Jagarlapudi, V. Balasubramanian
TDI · 30 · 6 · 0 · 19 Apr 2021

Towards Human-Understandable Visual Explanations: Imperceptible High-frequency Cues Can Better Be Removed
Kaili Wang, José Oramas, Tinne Tuytelaars
AAML · 27 · 2 · 0 · 16 Apr 2021

Deep Stable Learning for Out-Of-Distribution Generalization
Xingxuan Zhang, Peng Cui, Renzhe Xu, Linjun Zhou, Yue He, Zheyan Shen
OOD · 38 · 250 · 0 · 16 Apr 2021

Ridge Regression Neural Network for Pediatric Bone Age Assessment
Ibrahim Salim, A. B. Hamza
19 · 25 · 0 · 15 Apr 2021

Vision Transformer using Low-level Chest X-ray Feature Corpus for COVID-19 Diagnosis and Severity Quantification
Sangjoon Park, Gwanghyun Kim, Y. Oh, J. Seo, Sang Min Lee, Jin Hwan Kim, Sungjun Moon, Jae-Kwang Lim, Jong Chul Ye
ViT, MedIm · 58 · 97 · 0 · 15 Apr 2021

Mutual Information Preserving Back-propagation: Learn to Invert for Faithful Attribution
Huiqi Deng, Na Zou, Weifu Chen, Guo-Can Feng, Mengnan Du, Xia Hu
FAtt · 39 · 6 · 0 · 14 Apr 2021

Evaluating Saliency Methods for Neural Language Models
Shuoyang Ding, Philipp Koehn
FAtt, XAI · 23 · 54 · 0 · 12 Apr 2021

A-FMI: Learning Attributions from Deep Networks via Feature Map Importance
An Zhang, Xiang Wang, Chengfang Fang, Jie Shi, Tat-Seng Chua, Zehua Chen
FAtt · 32 · 3 · 0 · 12 Apr 2021

Enhancing Deep Neural Network Saliency Visualizations with Gradual Extrapolation
Tomasz Szandała
FAtt · 24 · 4 · 0 · 11 Apr 2021

Individual Explanations in Machine Learning Models: A Survey for Practitioners
Alfredo Carrillo, Luis F. Cantú, Alejandro Noriega
FaML · 24 · 15 · 0 · 09 Apr 2021

Deep Interpretable Models of Theory of Mind
Ini Oguntola, Dana Hughes, Katia Sycara
HAI · 33 · 26 · 0 · 07 Apr 2021

White Box Methods for Explanations of Convolutional Neural Networks in Image Classification Tasks
Meghna P. Ayyar, J. Benois-Pineau, A. Zemmari
FAtt · 30 · 17 · 0 · 06 Apr 2021