A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee
22 May 2017 | arXiv:1705.07874 | FAtt
Papers citing "A Unified Approach to Interpreting Model Predictions"

22 of 1,822 citing papers shown.
Title | Authors | Topics | Date
Minerva: A File-Based Ransomware Detector | Dorjan Hitaj, Giulio Pagnotta, Fabio De Gaspari, Lorenzo De Carli, L. Mancini | AAML | 26 Jan 2023
Explaining Quantum Circuits with Shapley Values: Towards Explainable Quantum Machine Learning | R. Heese, Thore Gerlach, Sascha Mücke, Sabine Müller, Matthias Jakobs, Nico Piatkowski | - | 22 Jan 2023
Measuring the Driving Forces of Predictive Performance: Application to Credit Scoring | Hué Sullivan, Hurlin Christophe, Pérignon Christophe, Saurin Sébastien | - | 12 Dec 2022
Psychophysiology-aided Perceptually Fluent Speech Analysis of Children Who Stutter | Yi Xiao, Harshit Sharma, V. Tumanova, Asif Salekin | - | 16 Nov 2022
Care for the Mind Amid Chronic Diseases: An Interpretable AI Approach Using IoT | Jiaheng Xie, Xiaohang Zhao, Xiang Liu, Xiao Fang | OOD | 08 Nov 2022
VISTANet: VIsual Spoken Textual Additive Net for Interpretable Multimodal Emotion Recognition | Puneet Kumar, Sarthak Malik, Balasubramanian Raman, Xiaobai Li | - | 24 Aug 2022
Navigating Neural Space: Revisiting Concept Activation Vectors to Overcome Directional Divergence | Frederik Pahde, Maximilian Dreyer, Leander Weber, Moritz Weckbecker, Christopher J. Anders, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin | - | 07 Feb 2022
The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective | Satyapriya Krishna, Tessa Han, Alex Gu, Steven Wu, S. Jabbari, Himabindu Lakkaraju | - | 03 Feb 2022
Uncovering the Dark Side of Telegram: Fakes, Clones, Scams, and Conspiracy Movements | Massimo La Morgia, Alessandro Mei, Alberto Maria Mongardini, Jie Wu | - | 26 Nov 2021
Rule Generation for Classification: Scalability, Interpretability, and Fairness | Tabea E. Rober, Adia C. Lumadjeng, M. Akyuz, Ş. İlker Birbil | - | 21 Apr 2021
On the Tractability of SHAP Explanations | Guy Van den Broeck, A. Lykov, Maximilian Schleich, Dan Suciu | FAtt, TDI | 18 Sep 2020
SNoRe: Scalable Unsupervised Learning of Symbolic Node Representations | Sebastian Mežnar, Nada Lavrač, Blaž Škrlj | - | 08 Sep 2020
Explainability in Deep Reinforcement Learning | Alexandre Heuillet, Fabien Couthouis, Natalia Díaz Rodríguez | XAI | 15 Aug 2020
Explaining Naive Bayes and Other Linear Classifiers with Polynomial Time and Delay | Sasha Rubin, Thomas Gerspacher, Martin C. Cooper, Alexey Ignatiev, Nina Narodytska | FAtt | 13 Aug 2020
How Much Can I Trust You? -- Quantifying Uncertainties in Explaining Neural Networks | Kirill Bykov, Marina M.-C. Höhne, Klaus-Robert Müller, Shinichi Nakajima, Marius Kloft | UQCV, FAtt | 16 Jun 2020
Explainable AI for a No-Teardown Vehicle Component Cost Estimation: A Top-Down Approach | A. Moawad, E. Islam, Namdoo Kim, R. Vijayagopal, A. Rousseau, Wei Biao Wu | - | 15 Jun 2020
SurvLIME: A method for explaining machine learning survival models | M. Kovalev, Lev V. Utkin, E. Kasimov | - | 18 Mar 2020
Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications | Wojciech Samek, G. Montavon, Sebastian Lapuschkin, Christopher J. Anders, K. Müller | XAI | 17 Mar 2020
A Supervised Machine Learning Model For Imputing Missing Boarding Stops In Smart Card Data | Nadav Shalit, Michael Fire, Eran Ben-Elia | - | 10 Mar 2020
Two Decades of AI4NETS-AI/ML for Data Networks: Challenges & Research Directions | P. Casas | GNN | 03 Mar 2020
Learning Important Features Through Propagating Activation Differences | Avanti Shrikumar, Peyton Greenside, A. Kundaje | FAtt | 10 Apr 2017
Not Just a Black Box: Learning Important Features Through Propagating Activation Differences | Avanti Shrikumar, Peyton Greenside, A. Shcherbina, A. Kundaje | FAtt | 05 May 2016