ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

SmoothGrad: removing noise by adding noise (arXiv:1706.03825)

12 June 2017
D. Smilkov
Nikhil Thorat
Been Kim
F. Viégas
Martin Wattenberg
    FAtt
    ODL
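For context, the cited method computes a saliency map by averaging the model's input gradient over several noise-perturbed copies of the input, M̂(x) = (1/n) Σᵢ M(x + 𝒩(0, σ²)), with σ around 10–20% of the input range. A minimal sketch of that averaging loop; the quadratic `grad_fn` below is a stand-in for a real network's input gradient, not part of the paper:

```python
import numpy as np

def smoothgrad(grad_fn, x, n_samples=50, sigma=0.15, seed=0):
    """Average grad_fn over n_samples Gaussian-perturbed copies of x.

    sigma is the noise standard deviation; the paper suggests roughly
    10-20% of the input's value range.
    """
    rng = np.random.default_rng(seed)
    total = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        noise = rng.normal(0.0, sigma, size=x.shape)
        total += grad_fn(x + noise)  # gradient at a noisy copy of x
    return total / n_samples

# Stand-in "model": f(x) = sum(x**2), whose input gradient is 2*x.
grad_fn = lambda x: 2.0 * x

x = np.array([1.0, -2.0, 3.0])
saliency = smoothgrad(grad_fn, x, n_samples=500, sigma=0.1)
# Zero-mean noise keeps the average close to the clean gradient 2*x.
```

With a real network, `grad_fn` would be the gradient of the class score with respect to the input pixels; the averaging suppresses the high-frequency fluctuations that make raw gradient maps noisy.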

Papers citing "SmoothGrad: removing noise by adding noise"

50 of 1,161 papers shown
"Is your explanation stable?": A Robustness Evaluation Framework for Feature Attribution
Yuyou Gan
Yuhao Mao
Xuhong Zhang
S. Ji
Yuwen Pu
Meng Han
Jianwei Yin
Ting Wang
FAtt
AAML
12
15
0
05 Sep 2022
Generating detailed saliency maps using model-agnostic methods
Maciej Sakowicz
FAtt
23
0
0
04 Sep 2022
Deep Stable Representation Learning on Electronic Health Records
Yingtao Luo
Zhaocheng Liu
Qiang Liu
OOD
BDL
CML
41
3
0
03 Sep 2022
Exploring Gradient-based Multi-directional Controls in GANs
Zikun Chen
R. Jiang
Brendan Duke
Han Zhao
P. Aarabi
23
10
0
01 Sep 2022
Concept Gradient: Concept-based Interpretation Without Linear Assumption
Andrew Bai
Chih-Kuan Yeh
Pradeep Ravikumar
Neil Y. C. Lin
Cho-Jui Hsieh
30
15
0
31 Aug 2022
A Deep Perceptual Measure for Lens and Camera Calibration
Yannick Hold-Geoffroy
Dominique Piché-Meunier
Kalyan Sunkavalli
Jean-Charles Bazin
François Rameau
Jean-François Lalonde
HAI
19
10
0
25 Aug 2022
ProtoPFormer: Concentrating on Prototypical Parts in Vision Transformers for Interpretable Image Recognition
Mengqi Xue
Qihan Huang
Haofei Zhang
Lechao Cheng
Mingli Song
Ming-hui Wu
Mingli Song
ViT
35
52
0
22 Aug 2022
Inferring Sensitive Attributes from Model Explanations
Vasisht Duddu
A. Boutet
MIACV
SILM
24
16
0
21 Aug 2022
Causality-Inspired Taxonomy for Explainable Artificial Intelligence
Pedro C. Neto
Tiago B. Gonçalves
João Ribeiro Pinto
W. Silva
Ana F. Sequeira
Arun Ross
Jaime S. Cardoso
XAI
43
12
0
19 Aug 2022
UKP-SQuARE v2: Explainability and Adversarial Attacks for Trustworthy QA
Rachneet Sachdeva
Haritz Puerto
Tim Baumgärtner
Sewin Tariverdian
Hao Zhang
Kexin Wang
H. Saad
Leonardo F. R. Ribeiro
Iryna Gurevych
AAML
23
2
0
19 Aug 2022
Transcending XAI Algorithm Boundaries through End-User-Inspired Design
Weina Jin
Jianyu Fan
D. Gromala
Philippe Pasquier
Xiaoxiao Li
Ghassan Hamarneh
28
3
0
18 Aug 2022
Visual Explanation of Deep Q-Network for Robot Navigation by Fine-tuning Attention Branch
Yuya Maruyama
Hiroshi Fukui
Tsubasa Hirakawa
Takayoshi Yamashita
H. Fujiyoshi
K. Sugiura
45
1
0
18 Aug 2022
The Weighting Game: Evaluating Quality of Explainability Methods
Lassi Raatikainen
Esa Rahtu
FAtt
XAI
34
4
0
12 Aug 2022
Comparing Baseline Shapley and Integrated Gradients for Local Explanation: Some Additional Insights
Tianshu Feng
Zhipu Zhou
Tarun Joshi
V. Nair
FAtt
25
4
0
12 Aug 2022
E Pluribus Unum Interpretable Convolutional Neural Networks
George Dimas
Eirini Cholopoulou
D. Iakovidis
28
3
0
10 Aug 2022
Shap-CAM: Visual Explanations for Convolutional Neural Networks based on Shapley Value
Quan Zheng
Ziwei Wang
Jie Zhou
Jiwen Lu
FAtt
33
31
0
07 Aug 2022
Generalizability Analysis of Graph-based Trajectory Predictor with Vectorized Representation
Juanwu Lu
Wei Zhan
Masayoshi Tomizuka
Yeping Hu
27
6
0
06 Aug 2022
An Interpretability Evaluation Benchmark for Pre-trained Language Models
Ya-Ming Shen
Lijie Wang
Ying-Cong Chen
Xinyan Xiao
Jing Liu
Hua Wu
42
4
0
28 Jul 2022
Adaptive occlusion sensitivity analysis for visually explaining video recognition networks
Tomoki Uchiyama
Naoya Sogi
S. Iizuka
Koichiro Niinuma
Kazuhiro Fukui
24
2
0
26 Jul 2022
Lazy Estimation of Variable Importance for Large Neural Networks
Yue Gao
Abby Stevens
Rebecca Willett
Garvesh Raskutti
43
4
0
19 Jul 2022
Verifying Attention Robustness of Deep Neural Networks against Semantic Perturbations
S. Munakata
Caterina Urban
Haruki Yokoyama
Koji Yamamoto
Kazuki Munakata
AAML
24
4
0
13 Jul 2022
Jacobian Norm with Selective Input Gradient Regularization for Improved and Interpretable Adversarial Defense
Deyin Liu
Lin Wu
Haifeng Zhao
F. Boussaïd
Bennamoun
Xianghua Xie
AAML
12
3
0
09 Jul 2022
TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations
Dylan Slack
Satyapriya Krishna
Himabindu Lakkaraju
Sameer Singh
37
74
0
08 Jul 2022
The Harvard USPTO Patent Dataset: A Large-Scale, Well-Structured, and Multi-Purpose Corpus of Patent Applications
Mirac Suzgun
Luke Melas-Kyriazi
Suproteem K. Sarkar
S. Kominers
Stuart M. Shieber
48
27
0
08 Jul 2022
Abs-CAM: A Gradient Optimization Interpretable Approach for Explanation of Convolutional Neural Networks
Chunyan Zeng
Kang Yan
Zhifeng Wang
Yan Yu
Shiyan Xia
Nan Zhao
FAtt
19
39
0
08 Jul 2022
Calibrate to Interpret
Gregory Scafarto
N. Posocco
Antoine Bonnefoy
FaML
16
3
0
07 Jul 2022
An Additive Instance-Wise Approach to Multi-class Model Interpretation
Vy Vo
Van Nguyen
Trung Le
Quan Hung Tran
Gholamreza Haffari
S. Çamtepe
Dinh Q. Phung
FAtt
48
5
0
07 Jul 2022
SESS: Saliency Enhancing with Scaling and Sliding
Osman Tursun
Simon Denman
Sridha Sridharan
Clinton Fookes
11
5
0
05 Jul 2022
Fidelity of Ensemble Aggregation for Saliency Map Explanations using Bayesian Optimization Techniques
Yannik Mahlau
Christian Nolde
FAtt
40
0
0
04 Jul 2022
Distilling Ensemble of Explanations for Weakly-Supervised Pre-Training of Image Segmentation Models
Xuhong Li
Haoyi Xiong
Yi Liu
Dingfu Zhou
Zeyu Chen
Yaqing Wang
Dejing Dou
29
7
0
04 Jul 2022
Interpretable by Design: Learning Predictors by Composing Interpretable Queries
Aditya Chattopadhyay
Stewart Slocum
B. Haeffele
René Vidal
D. Geman
28
21
0
03 Jul 2022
A systematic review of biologically-informed deep learning models for cancer: fundamental trends for encoding and interpreting oncology data
Magdalena Wysocka
Oskar Wysocki
Marie Zufferey
Dónal Landers
André Freitas
AI4CE
50
28
0
02 Jul 2022
On the amplification of security and privacy risks by post-hoc explanations in machine learning models
Pengrui Quan
Supriyo Chakraborty
J. Jeyakumar
Mani B. Srivastava
MIACV
AAML
8
5
0
28 Jun 2022
BAGEL: A Benchmark for Assessing Graph Neural Network Explanations
Mandeep Rathee
Thorben Funke
Avishek Anand
Megha Khosla
49
15
0
28 Jun 2022
Improving Disease Classification Performance and Explainability of Deep Learning Models in Radiology with Heatmap Generators
A. Watanabe
Sara Ketabi
Khashayar Namdar
Farzad Khalvati
24
8
0
28 Jun 2022
When are Post-hoc Conceptual Explanations Identifiable?
Tobias Leemann
Michael Kirchhof
Yao Rong
Enkelejda Kasneci
Gjergji Kasneci
57
10
0
28 Jun 2022
PARTICUL: Part Identification with Confidence measure using Unsupervised Learning
Romain Xu-Darme
Georges Quénot
Zakaria Chihani
M. Rousset
24
7
0
27 Jun 2022
FlowX: Towards Explainable Graph Neural Networks via Message Flows
Shurui Gui
Hao Yuan
Jie Wang
Qicheng Lao
Kang Li
Shuiwang Ji
43
12
0
26 Jun 2022
Analyzing Explainer Robustness via Probabilistic Lipschitzness of Prediction Functions
Zulqarnain Khan
Davin Hill
A. Masoomi
Joshua Bone
Jennifer Dy
AAML
48
3
0
24 Jun 2022
Robustness of Explanation Methods for NLP Models
Shriya Atmakuri
Tejas Chheda
Dinesh Kandula
Nishant Yadav
Taesung Lee
Hessel Tuinhof
FAtt
AAML
32
4
0
24 Jun 2022
Explanation-based Counterfactual Retraining (XCR): A Calibration Method for Black-box Models
Liu Zhendong
Wenyu Jiang
Yan Zhang
Chongjun Wang
CML
16
0
0
22 Jun 2022
OpenXAI: Towards a Transparent Evaluation of Model Explanations
Chirag Agarwal
Dan Ley
Satyapriya Krishna
Eshika Saxena
Martin Pawelczyk
Nari Johnson
Isha Puri
Marinka Zitnik
Himabindu Lakkaraju
XAI
31
141
0
22 Jun 2022
Visualizing and Understanding Contrastive Learning
Fawaz Sammani
Boris Joukovsky
Nikos Deligiannis
SSL
FAtt
20
9
0
20 Jun 2022
Neural Activation Patterns (NAPs): Visual Explainability of Learned Concepts
Alex Bauerle
Daniel Jonsson
Timo Ropinski
FAtt
45
12
0
20 Jun 2022
FD-CAM: Improving Faithfulness and Discriminability of Visual Explanation for CNNs
Hui Li
Zihao Li
Rui Ma
Tieru Wu
FAtt
36
8
0
17 Jun 2022
What do navigation agents learn about their environment?
Kshitij Dwivedi
Gemma Roig
Aniruddha Kembhavi
Roozbeh Mottaghi
47
11
0
17 Jun 2022
Adversarial Patch Attacks and Defences in Vision-Based Tasks: A Survey
Abhijith Sharma
Yijun Bian
Phil Munz
Apurva Narayan
VLM
AAML
29
20
0
16 Jun 2022
The Manifold Hypothesis for Gradient-Based Explanations
Sebastian Bordt
Uddeshya Upadhyay
Zeynep Akata
U. V. Luxburg
FAtt
AAML
33
12
0
15 Jun 2022
Self-Supervision on Images and Text Reduces Reliance on Visual Shortcut Features
Anil Palepu
Andrew L. Beam
OOD
VLM
29
5
0
14 Jun 2022
On the explainable properties of 1-Lipschitz Neural Networks: An Optimal Transport Perspective
M. Serrurier
Franck Mamalet
Thomas Fel
Louis Bethune
Thibaut Boissin
AAML
FAtt
34
4
0
14 Jun 2022