ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

A Unified Approach to Interpreting Model Predictions (arXiv 1705.07874)

22 May 2017
Scott M. Lundberg
Su-In Lee
    FAtt

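As a reminder of the technique the listed paper unifies: the Shapley value attributes a prediction to each feature by averaging its marginal contribution over all feature subsets. A minimal, illustrative sketch (a brute-force computation on a toy 3-feature model, not the paper's optimized estimators; the baseline-replacement convention for "absent" features is one common choice, assumed here):

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for f at point x, simulating feature
    absence by substituting the baseline value."""
    n = len(x)

    def value(subset):
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return f(z)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += w * (value(set(S) | {i}) - value(set(S)))
        phis.append(phi)
    return phis

# Toy model with an interaction term between features 1 and 2.
f = lambda z: 2 * z[0] + z[1] * z[2]
x = [1.0, 2.0, 3.0]
base = [0.0, 0.0, 0.0]
phi = shapley_values(f, x, base)

# Efficiency axiom: attributions sum to f(x) - f(baseline).
assert abs(sum(phi) - (f(x) - f(base))) < 1e-9
```

The interaction term is split evenly between features 1 and 2, while feature 0's purely additive effect is attributed to it alone. This exponential-cost computation is exactly what the paper's SHAP approximations make tractable.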
Papers citing "A Unified Approach to Interpreting Model Predictions"

50 / 1,822 papers shown
DISCOVER: Making Vision Networks Interpretable via Competition and Dissection
Konstantinos P. Panousis
S. Chatzis
62
5
0
07 Oct 2023
Cell Tracking-by-detection using Elliptical Bounding Boxes
Lucas N. Kirsten
Cláudio R. Jung
23
1
0
07 Oct 2023
LIPEx-Locally Interpretable Probabilistic Explanations-To Look Beyond The True Class
Hongbo Zhu
Angelo Cangelosi
Procheta Sen
Anirbit Mukherjee
FAtt
55
0
0
07 Oct 2023
A New Baseline Assumption of Integrated Gradients Based on Shapley Value
Shuyang Liu
Zixuan Chen
Ge Shi
Ji Wang
Changjie Fan
Yu Xiong
Runze Wu
Yujing Hu
Ze Ji
Yang Gao
30
3
0
07 Oct 2023
Measuring Information in Text Explanations
Zining Zhu
Frank Rudzicz
FAtt
49
0
0
06 Oct 2023
Multi-decadal Sea Level Prediction using Neural Networks and Spectral Clustering on Climate Model Large Ensembles and Satellite Altimeter Data
S. Sinha
J. Fasullo
R. S. Nerem
C. Monteleoni
AI4Cl
24
0
0
06 Oct 2023
SPADE: Sparsity-Guided Debugging for Deep Neural Networks
Arshia Soltani Moakhar
Eugenia Iofinova
Elias Frantar
Dan Alistarh
73
2
0
06 Oct 2023
Fair Feature Importance Scores for Interpreting Tree-Based Methods and Surrogates
Camille Olivia Little
Debolina Halder Lina
Genevera I. Allen
81
1
0
06 Oct 2023
Introducing the Attribution Stability Indicator: a Measure for Time Series XAI Attributions
U. Schlegel
Daniel A. Keim
AI4TS
63
1
0
06 Oct 2023
DecoderLens: Layerwise Interpretation of Encoder-Decoder Transformers
Anna Langedijk
Hosein Mohebbi
Gabriele Sarti
Willem H. Zuidema
Jaap Jumelet
51
12
0
05 Oct 2023
Adversarial Machine Learning for Social Good: Reframing the Adversary as an Ally
Shawqi Al-Maliki
Adnan Qayyum
Hassan Ali
M. Abdallah
Junaid Qadir
D. Hoang
Dusit Niyato
Ala I. Al-Fuqaha
AAML
81
3
0
05 Oct 2023
Redefining Digital Health Interfaces with Large Language Models
F. Imrie
Paulius Rauba
M. Schaar
AI4MH
LM&MA
44
3
0
05 Oct 2023
The Blame Problem in Evaluating Local Explanations, and How to Tackle it
Amir Hossein Akhavan Rahnama
ELM
FAtt
62
4
0
05 Oct 2023
A Survey of GPT-3 Family Large Language Models Including ChatGPT and GPT-4
Katikapalli Subramanyam Kalyan
LM&MA
AI4CE
LRM
AILaw
ELM
67
231
0
04 Oct 2023
Hate Speech Detection in Limited Data Contexts using Synthetic Data Generation
Aman Khullar
Daniel K. Nkemelu
Cuong V. Nguyen
Michael L. Best
53
2
0
04 Oct 2023
Improving Knowledge Distillation with Teacher's Explanation
S. Chowdhury
Ben Liang
A. Tizghadam
Ilijc Albanese
FAtt
21
0
0
04 Oct 2023
Auto-FP: An Experimental Study of Automated Feature Preprocessing for Tabular Data
Danrui Qi
Jinglin Peng
Yongjun He
Jiannan Wang
TPM
58
3
0
04 Oct 2023
Towards Feasible Counterfactual Explanations: A Taxonomy Guided Template-based NLG Method
Pedram Salimi
Nirmalie Wiratunga
D. Corsar
A. Wijekoon
42
1
0
03 Oct 2023
Driving with LLMs: Fusing Object-Level Vector Modality for Explainable Autonomous Driving
Long Chen
Oleg Sinavski
Jan Hünermann
Alice Karnsund
Andrew James Willmott
Danny Birch
Daniel Maund
Jamie Shotton
MLLM
52
193
0
03 Oct 2023
CausalTime: Realistically Generated Time-series for Benchmarking of Causal Discovery
Yuxiao Cheng
Ziqian Wang
Tingxiong Xiao
Qin Zhong
J. Suo
Kunlun He
AI4TS
CML
43
12
0
03 Oct 2023
Deciphering Diagnoses: How Large Language Models Explanations Influence Clinical Decision Making
D. Umerenkov
Galina Zubkova
A. Nesterov
ELM
47
3
0
03 Oct 2023
AI-based association analysis for medical imaging using latent-space geometric confounder correction
Xianjing Liu
Yue Liu
Meike W. Vernooij
E. Wolvius
Gennady V. Roshchupkin
Esther E. Bron
MedIm
62
0
0
03 Oct 2023
A Framework for Interpretability in Machine Learning for Medical Imaging
Alan Q. Wang
Batuhan K. Karaman
Heejong Kim
Jacob Rosenthal
Rachit Saluja
Sean I. Young
M. Sabuncu
AI4CE
95
12
0
02 Oct 2023
Designing User-Centric Behavioral Interventions to Prevent Dysglycemia with Novel Counterfactual Explanations
Asiful Arefeen
Hassan Ghasemzadeh
43
3
0
02 Oct 2023
Defending Against Authorship Identification Attacks
Haining Wang
38
1
0
02 Oct 2023
Co-audit: tools to help humans double-check AI-generated content
Andrew D. Gordon
Carina Negreanu
J. Cambronero
Rasika Chakravarthy
Ian Drosos
...
Hannah Richardson
Advait Sarkar
Stephanie Simmons
Jack Williams
Ben Zorn
65
13
0
02 Oct 2023
DINE: Dimensional Interpretability of Node Embeddings
Simone Piaggesi
Megha Khosla
Andre' Panisson
Avishek Anand
51
6
0
02 Oct 2023
Faithful Explanations of Black-box NLP Models Using LLM-generated Counterfactuals
Y. Gat
Nitay Calderon
Amir Feder
Alexander Chapanin
Amit Sharma
Roi Reichart
64
30
0
01 Oct 2023
LaPLACE: Probabilistic Local Model-Agnostic Causal Explanations
Sein Minn
XAI
FAtt
CML
16
1
0
01 Oct 2023
Black-box Attacks on Image Activity Prediction and its Natural Language Explanations
Alina Elena Baia
Valentina Poggioni
Andrea Cavallaro
AAML
36
1
0
30 Sep 2023
A PSO Based Method to Generate Actionable Counterfactuals for High Dimensional Data
Shashank Shekhar
Asif Salim
Adesh Bansode
Vivaswan Jinturkar
Anirudha Nayak
33
0
0
30 Sep 2023
Refutation of Shapley Values for XAI -- Additional Evidence
Xuanxiang Huang
Sasha Rubin
AAML
49
4
0
30 Sep 2023
Adversarial Explainability: Utilizing Explainable Machine Learning in Bypassing IoT Botnet Detection Systems
M. Alani
Atefeh Mashatan
Ali Miri
AAML
21
1
0
29 Sep 2023
Age Group Discrimination via Free Handwriting Indicators
Eugenio Lomurno
Simone Toffoli
D. D. Febbo
Matteo Matteucci
Francesca Lunardini
Simona Ferrante
49
1
0
29 Sep 2023
Prototype Generation: Robust Feature Visualisation for Data Independent Interpretability
Ziyin Li
Bao Feng
41
1
0
29 Sep 2023
Dynamic Interpretability for Model Comparison via Decision Rules
Adam Rida
Marie-Jeanne Lesot
Junsheng Wang
Liyan Zhang
33
0
0
29 Sep 2023
Tell Me a Story! Narrative-Driven XAI with Large Language Models
David Martens
James Hinns
Camille Dams
Mark Vergouwen
Theodoros Evgeniou
40
4
0
29 Sep 2023
Reliability Quantification of Deep Reinforcement Learning-based Control
Hitoshi Yoshioka
Hirotada Hashimoto
44
0
0
29 Sep 2023
Discrete-Choice Model with Generalized Additive Utility Network
Tomoki Nishi
Yusuke Hara
96
0
0
29 Sep 2023
Axiomatic Aggregations of Abductive Explanations
Gagan Biradar
Yacine Izza
Elita Lobo
Vignesh Viswanathan
Yair Zick
FAtt
52
6
0
29 Sep 2023
Beyond Tides and Time: Machine Learning Triumph in Water Quality
Yinpu Li
Siqi Mao
Yaping Yuan
Ziren Wang
Yixin Kang
Yuanxin Yao
21
0
0
29 Sep 2023
ONNXExplainer: an ONNX Based Generic Framework to Explain Neural Networks Using Shapley Values
Yong Zhao
Runxin He
Nicholas Kersting
Can Liu
Shubham Agrawal
Chiranjeet Chetia
Yu Gu
FAtt
TDI
89
0
0
29 Sep 2023
On Generating Explanations for Reinforcement Learning Policies: An Empirical Study
Mikihisa Yuasa
Huy T. Tran
R. Sreenivas
FAtt
LRM
87
1
0
29 Sep 2023
Granularity at Scale: Estimating Neighborhood Socioeconomic Indicators from High-Resolution Orthographic Imagery and Hybrid Learning
Ethan Brewer
Giovani Valdrighi
Antonio Longa
Joao Rulff
Andrea Passerini
Zhonghui Lv
Manfred Jaeger
Claudio Silva
27
0
0
28 Sep 2023
Towards Faithful Neural Network Intrinsic Interpretation with Shapley Additive Self-Attribution
Ying Sun
Hengshu Zhu
Huixia Xiong
TDI
FAtt
MILM
69
1
0
27 Sep 2023
Neural Stochastic Differential Equations for Robust and Explainable Analysis of Electromagnetic Unintended Radiated Emissions
Sumit Kumar Jha
Susmit Jha
Rickard Ewetz
Alvaro Velasquez
40
2
0
27 Sep 2023
DeepROCK: Error-controlled interaction detection in deep neural networks
Winston Chen
William Stafford Noble
Y. Lu
45
1
0
26 Sep 2023
Explaining Deep Face Algorithms through Visualization: A Survey
Thrupthi Ann
S. M. I. C. V. Balasubramanian
M. Jawahar
CVBM
42
1
0
26 Sep 2023
Linked shrinkage to improve estimation of interaction effects in regression models
Mark van de Wiel
Matteo Amestoy
J. Hoogland
13
1
0
25 Sep 2023
May I Ask a Follow-up Question? Understanding the Benefits of Conversations in Neural Network Explainability
Tong Zhang
Xiaoyu Yang
Boyang Albert Li
51
4
0
25 Sep 2023