ResearchTrend.AI

Asymmetric Shapley values: incorporating causal knowledge into model-agnostic explainability
Christopher Frye, C. Rowat, Ilya Feige
arXiv:1910.06358, 14 October 2019

Papers citing "Asymmetric Shapley values: incorporating causal knowledge into model-agnostic explainability"

41 papers:
  • A New Approach to Backtracking Counterfactual Explanations: A Unified Causal Framework for Efficient Model Interpretability. Pouria Fatemi, Ehsan Sharifian, Mohammad Hossein Yassaee. 05 May 2025.
  • From Abstract to Actionable: Pairwise Shapley Values for Explainable AI. Jiaxin Xu, Hung Chau, Angela Burden. 18 Feb 2025.
  • AI Data Readiness Inspector (AIDRIN) for Quantitative Assessment of Data Readiness for AI. Kaveen Hiniduma, Suren Byna, J. L. Bez, Ravi Madduri. 27 Jun 2024.
  • Partial Information Decomposition for Data Interpretability and Feature Selection. Charles Westphal, Stephen Hailes, Mirco Musolesi. 29 May 2024.
  • REFRESH: Responsible and Efficient Feature Reselection Guided by SHAP Values. Shubham Sharma, Sanghamitra Dutta, Emanuele Albini, Freddy Lecue, Daniele Magazzeni, Manuela Veloso. 13 Mar 2024.
  • Explaining Probabilistic Models with Distributional Values. Luca Franceschi, Michele Donini, Cédric Archambeau, Matthias Seeger. 15 Feb 2024.
  • Succinct Interaction-Aware Explanations. Sascha Xu, Joscha Cuppers, Jilles Vreeken. 08 Feb 2024.
  • Information-Theoretic State Variable Selection for Reinforcement Learning. Charles Westphal, Stephen Hailes, Mirco Musolesi. 21 Jan 2024.
  • Theoretical Evaluation of Asymmetric Shapley Values for Root-Cause Analysis. Domokos M. Kelen, Mihaly Petreczky, Péter Kersch, András A. Benczúr. 15 Oct 2023.
  • Towards Faithful Neural Network Intrinsic Interpretation with Shapley Additive Self-Attribution. Ying Sun, Hengshu Zhu, Huixia Xiong. 27 Sep 2023.
  • Beyond Single-Feature Importance with ICECREAM. M.-J. Oesterle, Patrick Blobaum, Atalanti A. Mastakouri, Elke Kirschbaum. 19 Jul 2023.
  • Shapley Sets: Feature Attribution via Recursive Function Decomposition. Torty Sivill, Peter A. Flach. 04 Jul 2023.
  • PWSHAP: A Path-Wise Explanation Model for Targeted Variables. Lucile Ter-Minassian, Oscar Clivio, Karla Diaz-Ordaz, R. Evans, Chris Holmes. 26 Jun 2023.
  • Explaining the Uncertain: Stochastic Shapley Values for Gaussian Process Models. Siu Lun Chau, Krikamol Muandet, Dino Sejdinovic. 24 May 2023.
  • Shapley Chains: Extending Shapley Values to Classifier Chains. CE Ayad, Thomas Bonnier, Benjamin Bosch, Jesse Read. 30 Mar 2023.
  • Improvement-Focused Causal Recourse (ICR). Gunnar Konig, Timo Freiesleben, Moritz Grosse-Wentrup. 27 Oct 2022.
  • Explanation Shift: Detecting distribution shifts on tabular data via the explanation space. Carlos Mougan, Klaus Broelemann, Gjergji Kasneci, T. Tiropanis, Steffen Staab. 22 Oct 2022.
  • Statistical Aspects of SHAP: Functional ANOVA for Model Interpretation. Andrew Herren, P. R. Hahn. 21 Aug 2022.
  • Unifying local and global model explanations by functional decomposition of low dimensional structures. M. Hiabu, Josephine T. Meyer, Marvin N. Wright. 12 Aug 2022.
  • The Shapley Value in Machine Learning. Benedek Rozemberczki, Lauren Watson, Péter Bayer, Hao-Tsung Yang, Oliver Kiss, Sebastian Nilsson, Rik Sarkar. 11 Feb 2022.
  • Explainability in Music Recommender Systems. Darius Afchar, Alessandro B. Melchiorre, Markus Schedl, Romain Hennequin, Elena V. Epure, Manuel Moussallam. 25 Jan 2022.
  • AcME -- Accelerated Model-agnostic Explanations: Fast Whitening of the Machine-Learning Black Box. David Dandolo, Chiara Masiero, Mattia Carletti, Davide Dalle Pezze, Gian Antonio Susto. 23 Dec 2021.
  • Using Shapley Values and Variational Autoencoders to Explain Predictive Models with Dependent Mixed Features. Lars Henry Berge Olsen, I. Glad, Martin Jullum, K. Aas. 26 Nov 2021.
  • Causal versus Marginal Shapley Values for Robotic Lever Manipulation Controlled using Deep Reinforcement Learning. Sindre Benjamin Remman, Inga Strümke, A. Lekkas. 04 Nov 2021.
  • Model Explanations via the Axiomatic Causal Lens. Gagan Biradar, Vignesh Viswanathan, Yair Zick. 08 Sep 2021.
  • Explaining Algorithmic Fairness Through Fairness-Aware Causal Path Decomposition. Weishen Pan, Sen Cui, Jiang Bian, Changshui Zhang, Fei Wang. 11 Aug 2021.
  • On Locality of Local Explanation Models. Sahra Ghalebikesabi, Lucile Ter-Minassian, Karla Diaz-Ordaz, Chris Holmes. 24 Jun 2021.
  • Rational Shapley Values. David S. Watson. 18 Jun 2021.
  • Shapley Counterfactual Credits for Multi-Agent Reinforcement Learning. Jiahui Li, Kun Kuang, Baoxiang Wang, Furui Liu, Long Chen, Fei Wu, Jun Xiao. 01 Jun 2021.
  • SHAFF: Fast and consistent SHApley eFfect estimates via random Forests. Clément Bénard, Gérard Biau, Sébastien Da Veiga, Erwan Scornet. 25 May 2021.
  • Explaining Black-Box Algorithms Using Probabilistic Contrastive Counterfactuals. Sainyam Galhotra, Romila Pradhan, Babak Salimi. 22 Mar 2021.
  • Shapley values for feature selection: The good, the bad, and the axioms. D. Fryer, Inga Strümke, Hien Nguyen. 22 Feb 2021.
  • The Shapley Value of Classifiers in Ensemble Games. Benedek Rozemberczki, Rik Sarkar. 06 Jan 2021.
  • Why model why? Assessing the strengths and limitations of LIME. Jurgen Dieber, S. Kirrane. 30 Nov 2020.
  • Explaining by Removing: A Unified Framework for Model Explanation. Ian Covert, Scott M. Lundberg, Su-In Lee. 21 Nov 2020.
  • Feature Removal Is a Unifying Principle for Model Explanation Methods. Ian Covert, Scott M. Lundberg, Su-In Lee. 06 Nov 2020.
  • Causal Shapley Values: Exploiting Causal Knowledge to Explain Individual Predictions of Complex Models. Tom Heskes, E. Sijben, I. G. Bucur, Tom Claassen. 03 Nov 2020.
  • Shapley Flow: A Graph-based Approach to Interpreting Model Predictions. Jiaxuan Wang, Jenna Wiens, Scott M. Lundberg. 27 Oct 2020.
  • Generative causal explanations of black-box classifiers. Matthew R. O’Shaughnessy, Gregory H. Canal, Marissa Connor, Mark A. Davenport, Christopher Rozell. 24 Jun 2020.
  • Explaining individual predictions when features are dependent: More accurate approximations to Shapley values. K. Aas, Martin Jullum, Anders Løland. 25 Mar 2019.
  • Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Alexandra Chouldechova. 24 Oct 2016.