ResearchTrend.AI

SmoothGrad: removing noise by adding noise

12 June 2017
D. Smilkov, Nikhil Thorat, Been Kim, F. Viégas, Martin Wattenberg
FAtt · ODL
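For context, the idea named in the title is simple: a gradient saliency map is averaged over several noisy copies of the input, which smooths out high-frequency fluctuations in the raw gradient. A minimal sketch of that averaging step follows; the function names and the toy score function are illustrative assumptions, not the paper's own code.

```python
import numpy as np

def smoothgrad(grad_fn, x, n_samples=50, sigma=0.15, seed=0):
    """Average a gradient saliency map over noisy copies of the input.

    grad_fn   -- callable returning d(score)/d(input) for a given input
    sigma     -- noise level, expressed relative to the input's value range
    n_samples -- number of noisy copies to average over
    """
    rng = np.random.default_rng(seed)
    # Scale the Gaussian noise to the dynamic range of the input.
    noise_scale = sigma * (x.max() - x.min())
    total = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, noise_scale, size=x.shape)
        total += grad_fn(noisy)
    return total / n_samples

# Toy stand-in for a model: score(x) = sum(x**2), so the gradient is 2*x.
grad_fn = lambda x: 2.0 * x
x = np.array([1.0, -2.0, 3.0])
sal = smoothgrad(grad_fn, x)
```

Because the toy gradient is linear, the smoothed map should stay close to the raw gradient `2*x`, with small residual noise that shrinks as `n_samples` grows.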

Papers citing "SmoothGrad: removing noise by adding noise"

50 / 1,161 papers shown
A Test Statistic Estimation-based Approach for Establishing Self-interpretable CNN-based Binary Classifiers
S. Sengupta, M. Anastasio
MedIm · 33 · 6 · 0 · 13 Mar 2023

RotoGBML: Towards Out-of-Distribution Generalization for Gradient-Based Meta-Learning
Min Zhang, Zifeng Zhuang, Zhitao Wang, Donglin Wang, Wen-Bin Li
46 · 5 · 0 · 12 Mar 2023

Use Perturbations when Learning from Explanations
Juyeon Heo, Vihari Piratla, Matthew Wicker, Adrian Weller
AAML · 40 · 1 · 0 · 11 Mar 2023

Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey
Yulong Wang, Tong Sun, Shenghong Li, Xinnan Yuan, W. Ni, Ekram Hossain, H. Vincent Poor
AAML · 31 · 18 · 0 · 11 Mar 2023

Towards Trust of Explainable AI in Thyroid Nodule Diagnosis
Hung Truong Thanh Nguyen, Van Binh Truong, V. Nguyen, Quoc Hung Cao, Quoc Khanh Nguyen
14 · 13 · 0 · 08 Mar 2023

CoRTX: Contrastive Framework for Real-time Explanation
Yu-Neng Chuang, Guanchu Wang, Fan Yang, Quan-Gen Zhou, Pushkar Tripathi, Xuanting Cai, Xia Hu
46 · 20 · 0 · 05 Mar 2023

DeepSeer: Interactive RNN Explanation and Debugging via State Abstraction
Zhijie Wang, Yuheng Huang, D. Song, Lei Ma, Tianyi Zhang
HAI · 58 · 5 · 0 · 02 Mar 2023

Feature Perturbation Augmentation for Reliable Evaluation of Importance Estimators in Neural Networks
L. Brocki, N. C. Chung
FAtt, AAML · 51 · 11 · 0 · 02 Mar 2023

Finding the right XAI method -- A Guide for the Evaluation and Ranking of Explainable AI Methods in Climate Science
P. Bommer, M. Kretschmer, Anna Hedström, Dilyara Bareeva, Marina M.-C. Höhne
54 · 38 · 0 · 01 Mar 2023

SUNY: A Visual Interpretation Framework for Convolutional Neural Networks from a Necessary and Sufficient Perspective
Xiwei Xuan, Ziquan Deng, Hsuan-Tien Lin, Z. Kong, Kwan-Liu Ma
AAML, FAtt · 37 · 2 · 0 · 01 Mar 2023

Single Image Backdoor Inversion via Robust Smoothed Classifiers
Mingjie Sun, Zico Kolter
AAML · 23 · 12 · 0 · 01 Mar 2023

Multi-Layer Attention-Based Explainability via Transformers for Tabular Data
Andrea Trevino Gavito, Diego Klabjan, J. Utke
LMTD · 25 · 3 · 0 · 28 Feb 2023

MDF-Net for abnormality detection by fusing X-rays with clinical data
Chih-Jou Hsieh, Isabel Blanco Nobre, Sandra Costa Sousa, Chun Ouyang, M. Brereton, Jacinto C. Nascimento, Joaquim A. Jorge, Catarina Moreira
18 · 8 · 0 · 26 Feb 2023

Don't be fooled: label leakage in explanation methods and the importance of their quantitative evaluation
N. Jethani, A. Saporta, Rajesh Ranganath
FAtt · 31 · 11 · 0 · 24 Feb 2023

Frequency and Scale Perspectives of Feature Extraction
Liangqi Zhang, Yihao Luo, Xiang Cao, Haibo Shen, Tianjiang Wang
21 · 0 · 0 · 24 Feb 2023

The Generalizability of Explanations
Hanxiao Tan
FAtt · 18 · 1 · 0 · 23 Feb 2023

Non-Uniform Interpolation in Integrated Gradients for Low-Latency Explainable-AI
Ashwin Bhat, A. Raychowdhury
25 · 4 · 0 · 22 Feb 2023

TAX: Tendency-and-Assignment Explainer for Semantic Segmentation with Multi-Annotators
Yuan Cheng, Zu-Yun Shiau, Fu-En Yang, Yu-Chiang Frank Wang
39 · 2 · 0 · 19 Feb 2023

The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus
Anna Hedström, P. Bommer, Kristoffer K. Wickstrom, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne
37 · 21 · 0 · 14 Feb 2023

On The Coherence of Quantitative Evaluation of Visual Explanations
Benjamin Vandersmissen, José Oramas
XAI, FAtt · 36 · 3 · 0 · 14 Feb 2023

Interpretable Diversity Analysis: Visualizing Feature Representations In Low-Cost Ensembles
Tim Whitaker, L. D. Whitley
13 · 0 · 0 · 12 Feb 2023

Towards a Deeper Understanding of Concept Bottleneck Models Through End-to-End Explanation
Jack Furby, Daniel Cunnington, Dave Braines, Alun D. Preece
22 · 6 · 0 · 07 Feb 2023

PAMI: partition input and aggregate outputs for model interpretation
Wei Shi, Wentao Zhang, Weishi Zheng, Ruixuan Wang
FAtt · 26 · 3 · 0 · 07 Feb 2023

Efficient XAI Techniques: A Taxonomic Survey
Yu-Neng Chuang, Guanchu Wang, Fan Yang, Zirui Liu, Xuanting Cai, Mengnan Du, Xia Hu
24 · 32 · 0 · 07 Feb 2023

LAVA: Granular Neuron-Level Explainable AI for Alzheimer's Disease Assessment from Fundus Images
Nooshin Yousefzadeh, Charlie Tran, Adolfo Ramirez-Zamora, Jinghua Chen, R. Fang, My T. Thai
32 · 1 · 0 · 06 Feb 2023

Variational Information Pursuit for Interpretable Predictions
Aditya Chattopadhyay, Kwan Ho Ryan Chan, B. Haeffele, D. Geman, René Vidal
DRL · 24 · 11 · 0 · 06 Feb 2023

Interpreting Robustness Proofs of Deep Neural Networks
Debangshu Banerjee, Avaljot Singh, Gagandeep Singh
AAML · 29 · 5 · 0 · 31 Jan 2023

A Survey of Explainable AI in Deep Visual Modeling: Methods and Metrics
Naveed Akhtar
XAI, VLM · 35 · 7 · 0 · 31 Jan 2023

Diffusion Models as Artists: Are we Closing the Gap between Humans and Machines?
Victor Boutin, Thomas Fel, Lakshya Singhal, Rishav Mukherji, Akash Nagaraj, Julien Colin, Thomas Serre
DiffM · 30 · 6 · 0 · 27 Jan 2023

Certified Interpretability Robustness for Class Activation Mapping
Alex Gu, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Lucani E. Daniel
AAML · 34 · 2 · 0 · 26 Jan 2023

Interpretable Out-Of-Distribution Detection Using Pattern Identification
Romain Xu-Darme, Julien Girard-Satabin, Darryl Hond, Gabriele Incorvaia, Zakaria Chihani
OODD · 30 · 3 · 0 · 24 Jan 2023

Sanity checks and improvements for patch visualisation in prototype-based image classification
Romain Xu-Darme, Georges Quénot, Zakaria Chihani, M. Rousset
10 · 3 · 0 · 20 Jan 2023

Interpreting CNN Predictions using Conditional Generative Adversarial Networks
Akash Guna, Raul Benitez, Sikha
GAN · 13 · 4 · 0 · 19 Jan 2023

Opti-CAM: Optimizing saliency maps for interpretability
Hanwei Zhang, Felipe Torres, R. Sicre, Yannis Avrithis, Stéphane Ayache
41 · 22 · 0 · 17 Jan 2023

Negative Flux Aggregation to Estimate Feature Attributions
X. Li, Deng Pan, Chengyin Li, Yao Qiang, D. Zhu
FAtt · 8 · 6 · 0 · 17 Jan 2023

Rationalizing Predictions by Adversarial Information Calibration
Lei Sha, Oana-Maria Camburu, Thomas Lukasiewicz
27 · 4 · 0 · 15 Jan 2023

Uncertainty Quantification for Local Model Explanations Without Model Access
Surin Ahn, J. Grana, Yafet Tamene, Kristian Holsheimer
FAtt · 31 · 0 · 0 · 13 Jan 2023

MoreauGrad: Sparse and Robust Interpretation of Neural Networks via Moreau Envelope
Jingwei Zhang, Farzan Farnia
UQCV · 36 · 3 · 0 · 08 Jan 2023

Explainability and Robustness of Deep Visual Classification Models
Jindong Gu
AAML · 47 · 2 · 0 · 03 Jan 2023

NeuroExplainer: Fine-Grained Attention Decoding to Uncover Cortical Development Patterns of Preterm Infants
Chen Xue, Fan Wang, Yuanzhuo Zhu, Hui Li, Deyu Meng, Dinggang Shen, C. Lian
52 · 2 · 0 · 01 Jan 2023

ExploreADV: Towards exploratory attack for Neural Networks
Tianzuo Luo, Yuyi Zhong, S. Khoo
AAML · 24 · 1 · 0 · 01 Jan 2023

Provable Robust Saliency-based Explanations
Chao Chen, Chenghua Guo, Guixiang Ma, Ming Zeng, Xi Zhang, Sihong Xie
AAML, FAtt · 38 · 0 · 0 · 28 Dec 2022

DeepCuts: Single-Shot Interpretability based Pruning for BERT
Jasdeep Singh Grover, Bhavesh Gawri, R. Manku
33 · 1 · 0 · 27 Dec 2022

Key Feature Replacement of In-Distribution Samples for Out-of-Distribution Detection
Jaeyoung Kim, Seo Taek Kong, Dongbin Na, Kyu-Hwan Jung
OODD · 24 · 4 · 0 · 26 Dec 2022

Impossibility Theorems for Feature Attribution
Blair Bilodeau, Natasha Jaques, Pang Wei Koh, Been Kim
FAtt · 20 · 68 · 0 · 22 Dec 2022

DExT: Detector Explanation Toolkit
Deepan Padmanabhan, Paul G. Plöger, Octavio Arriaga, Matias Valdenegro-Toro
38 · 2 · 0 · 21 Dec 2022

Bort: Towards Explainable Neural Networks with Bounded Orthogonal Constraint
Borui Zhang, Wenzhao Zheng, Jie Zhou, Jiwen Lu
AAML · 27 · 7 · 0 · 18 Dec 2022

Robust Explanation Constraints for Neural Networks
Matthew Wicker, Juyeon Heo, Luca Costabello, Adrian Weller
FAtt · 31 · 18 · 0 · 16 Dec 2022

On the Relationship Between Explanation and Prediction: A Causal View
Amir-Hossein Karimi, Krikamol Muandet, Simon Kornblith, Bernhard Schölkopf, Been Kim
FAtt, CML · 40 · 14 · 0 · 13 Dec 2022

Comparing the Decision-Making Mechanisms by Transformers and CNNs via Explanation Methods
Ming-Xiu Jiang, Saeed Khorram, Li Fuxin
FAtt · 27 · 9 · 0 · 13 Dec 2022