arXiv:1911.00467
Explaining black box decisions by Shapley cohort refinement
Masayoshi Mase, Art B. Owen, Benjamin B. Seiler
1 November 2019

Papers citing "Explaining black box decisions by Shapley cohort refinement" (17 of 17 papers shown):

1. Notes on Applicability of Explainable AI Methods to Machine Learning Models Using Features Extracted by Persistent Homology. Naofumi Hama. 15 Oct 2023.
2. Fast Approximation of the Shapley Values Based on Order-of-Addition Experimental Designs. Liuqing Yang, Yongdao Zhou, Haoda Fu, Min-Qian Liu, Wei Zheng. 16 Sep 2023.
3. Model free variable importance for high dimensional data. Naofumi Hama, Masayoshi Mase, Art B. Owen. 15 Nov 2022.
4. RbX: Region-based explanations of prediction models. Ismael Lemhadri, Harrison H. Li, Trevor Hastie. 17 Oct 2022.
5. Adaptive Bias Correction for Improved Subseasonal Forecasting. Soukayna Mouatadid, Paulo Orenstein, Genevieve Flaspohler, J. Cohen, Miruna Oprescu, E. Fraenkel, Lester W. Mackey. 21 Sep 2022.
6. Algorithms to estimate Shapley value feature attributions. Hugh Chen, Ian Covert, Scott M. Lundberg, Su-In Lee. 15 Jul 2022.
7. Shapley Computations Using Surrogate Model-Based Trees. Zhipu Zhou, Jie Chen, Linwei Hu. 11 Jul 2022.
8. Shapley-NAS: Discovering Operation Contribution for Neural Architecture Search. Han Xiao, Ziwei Wang, Zhengbiao Zhu, Jie Zhou, Jiwen Lu. 20 Jun 2022.
9. Confounder Analysis in Measuring Representation in Product Funnels. Jilei Yang, Wentao Su. 07 Jun 2022.
10. Deletion and Insertion Tests in Regression Models. Naofumi Hama, Masayoshi Mase, Art B. Owen. 25 May 2022.
11. Decorrelated Variable Importance. I. Verdinelli, Larry A. Wasserman. 21 Nov 2021.
12. What makes you unique? Benjamin B. Seiler, Masayoshi Mase, Art B. Owen. 17 May 2021.
13. Cohort Shapley value for algorithmic fairness. Masayoshi Mase, Art B. Owen, Benjamin B. Seiler. 15 May 2021.
14. Explaining a Series of Models by Propagating Shapley Values. Hugh Chen, Scott M. Lundberg, Su-In Lee. 30 Apr 2021.
15. Explaining by Removing: A Unified Framework for Model Explanation. Ian Covert, Scott M. Lundberg, Su-In Lee. 21 Nov 2020.
16. True to the Model or True to the Data? Hugh Chen, Joseph D. Janizek, Scott M. Lundberg, Su-In Lee. 29 Jun 2020.
17. Neuron Shapley: Discovering the Responsible Neurons. Amirata Ghorbani, James Zou. 23 Feb 2020.