How Interpretable and Trustworthy are GAMs? (arXiv:2006.06466)
C. Chang, S. Tan, Benjamin J. Lengerich, Anna Goldenberg, R. Caruana. 11 June 2020. [FAtt]
Papers citing "How Interpretable and Trustworthy are GAMs?" (21 papers shown)
Challenges in interpretability of additive models. Xinyu Zhang, Julien Martinelli, S. T. John. 14 Apr 2025. [AAML, AI4CE]
SurvBeX: An explanation method of the machine learning survival models based on the Beran estimator. Lev V. Utkin, Danila Eremenko, A. Konstantinov. 07 Aug 2023.
Curve Your Enthusiasm: Concurvity Regularization in Differentiable Generalized Additive Models. Julien N. Siems, Konstantin Ditschuneit, Winfried Ripken, Alma Lindborg, Maximilian Schambach, Johannes Otterbach, Martin Genzel. 19 May 2023.
GAM Coach: Towards Interactive and User-centered Algorithmic Recourse. Zijie J. Wang, J. W. Vaughan, R. Caruana, Duen Horng Chau. 27 Feb 2023. [HAI]
Interpretability with full complexity by constraining feature information. Kieran A. Murphy, Danielle Bassett. 30 Nov 2022. [FAtt]
Higher-order Neural Additive Models: An Interpretable Machine Learning Model with Feature Interactions. Minkyu Kim, Hyunjin Choi, Jinho Kim. 30 Sep 2022. [FAtt]
TimberTrek: Exploring and Curating Sparse Decision Trees with Interactive Visualization. Zijie J. Wang, Chudi Zhong, Rui Xin, Takuya Takagi, Zhi Chen, Duen Horng Chau, Cynthia Rudin, Margo Seltzer. 19 Sep 2022.
A Concept and Argumentation based Interpretable Model in High Risk Domains. Haixiao Chi, Dawei Wang, Gaojie Cui, Feng Mao, Beishui Liao. 17 Aug 2022.
Interpretability, Then What? Editing Machine Learning Models to Reflect Human Knowledge and Values. Zijie J. Wang, Alex Kale, Harsha Nori, P. Stella, M. Nunnally, Duen Horng Chau, Mihaela Vorvoreanu, J. W. Vaughan, R. Caruana. 30 Jun 2022. [KELM]
The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations. Aparna Balagopalan, Haoran Zhang, Kimia Hamidieh, Thomas Hartvigsen, Frank Rudzicz, Marzyeh Ghassemi. 06 May 2022.
Differentially Private Estimation of Heterogeneous Causal Effects. Fengshi Niu, Harsha Nori, B. Quistorff, R. Caruana, Donald Ngwe, A. Kannan. 22 Feb 2022. [CML]
Topological Representations of Local Explanations. Peter Xenopoulos, G. Chan, Harish Doraiswamy, L. G. Nonato, Brian Barr, Claudio Silva. 06 Jan 2022. [FAtt]
GAM Changer: Editing Generalized Additive Models with Interactive Visualization. Zijie J. Wang, Alex Kale, Harsha Nori, P. Stella, M. Nunnally, Duen Horng Chau, Mihaela Vorvoreanu, Jennifer Wortman Vaughan, R. Caruana. 06 Dec 2021. [KELM]
Accuracy, Interpretability, and Differential Privacy via Explainable Boosting. Harsha Nori, R. Caruana, Zhiqi Bu, J. Shen, Janardhan Kulkarni. 17 Jun 2021.
A Comprehensive Taxonomy for Explainable Artificial Intelligence: A Systematic Survey of Surveys on Methods and Concepts. Gesina Schwalbe, Bettina Finzel. 15 May 2021. [XAI]
Ensembles of Random SHAPs. Lev V. Utkin, A. Konstantinov. 04 Mar 2021. [FAtt]
Interpretable Machine Learning with an Ensemble of Gradient Boosting Machines. A. Konstantinov, Lev V. Utkin. 14 Oct 2020. [FedML, AI4CE]
GAMI-Net: An Explainable Neural Network based on Generalized Additive Models with Structured Interactions. Zebin Yang, Aijun Zhang, Agus Sudjianto. 16 Mar 2020. [FAtt]
A Survey on Bias and Fairness in Machine Learning. Ninareh Mehrabi, Fred Morstatter, N. Saxena, Kristina Lerman, Aram Galstyan. 23 Aug 2019. [SyDa, FaML]
Towards A Rigorous Science of Interpretable Machine Learning. Finale Doshi-Velez, Been Kim. 28 Feb 2017. [XAI, FaML]
Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Alexandra Chouldechova. 24 Oct 2016. [FaML]