
Unbiased Measurement of Feature Importance in Tree-Based Methods
arXiv:1903.05179
12 March 2019
Zhengze Zhou
Giles Hooker

Papers citing "Unbiased Measurement of Feature Importance in Tree-Based Methods"

31 papers
 1. GlyTwin: Digital Twin for Glucose Control in Type 1 Diabetes Through Optimal Behavioral Modifications Using Patient-Centric Counterfactuals
    Asiful Arefeen, Saman Khamesian, Maria Adela Grando, Bithika Thompson, Hassan Ghasemzadeh
    14 Apr 2025
 2. Multi forests: Variable importance for multi-class outcomes
    Roman Hornung, Alexander Hapfelmeier
    13 Sep 2024
 3. Predicting the duration of traffic incidents for Sydney greater metropolitan area using machine learning methods
    Artur Grigorev, S. Shafiei, Hanna Grzybowska, Adriana-Simona Mihaita
    27 Jun 2024
 4. Evaluating Explanatory Capabilities of Machine Learning Models in Medical Diagnostics: A Human-in-the-Loop Approach
    José Bobes-Bascarán, E. Mosqueira-Rey, Á. Fernández-Leal, Elena Hernández-Pereira, David Alonso-Ríos, V. Moret-Bonillo, Israel Figueirido-Arnoso, Y. Vidal-Ínsua
    28 Mar 2024
 5. McUDI: Model-Centric Unsupervised Degradation Indicator for Failure Prediction AIOps Solutions
    Lorena Poenaru-Olaru, Luís Cruz, Jan S. Rellermeyer, A. V. Deursen
    25 Jan 2024
 6. End-to-end Feature Selection Approach for Learning Skinny Trees
    Shibal Ibrahim, Kayhan Behdin, Rahul Mazumder
    28 Oct 2023
 7. Unbiased Gradient Boosting Decision Tree with Unbiased Feature Importance
    Zheyu Zhang, Tianze Zhang, Jun Yu Li
    18 May 2023
 8. Interpreting Deep Forest through Feature Contribution and MDI Feature Importance
    Yi He, Shen-Huan Lyu, Yuan Jiang
    01 May 2023
 9. CIMLA: Interpretable AI for inference of differential causal networks
    Payam Dibaeinia, S. Sinha
    25 Apr 2023
10. The Berkelmans-Pries Feature Importance Method: A Generic Measure of Informativeness of Features
    Joris Pries, Guus Berkelmans, Sandjai Bhulai, R. V. D. Mei
    11 Jan 2023
11. Individualized and Global Feature Attributions for Gradient Boosted Trees in the Presence of $\ell_2$ Regularization
    Qingyao Sun
    08 Nov 2022
12. ControlBurn: Nonlinear Feature Selection with Sparse Tree Ensembles
    Brian Liu, Miao Xie, Haoyue Yang, Madeleine Udell
    08 Jul 2022
13. A Novel Splitting Criterion Inspired by Geometric Mean Metric Learning for Decision Tree
    Dan Li, Songcan Chen
    23 Apr 2022
14. Fast Interpretable Greedy-Tree Sums
    Yan Shuo Tan, Chandan Singh, Keyan Nasseri, Abhineet Agarwal, James Duncan, Omer Ronen, M. Epland, Aaron E. Kornblith, Bin-Xia Yu
    28 Jan 2022
15. ControlBurn: Feature Selection by Sparse Forests
    Brian Liu, Miao Xie, Madeleine Udell
    01 Jul 2021
16. S-LIME: Stabilized-LIME for Model Explanation
    Zhengze Zhou, Giles Hooker, Fei Wang
    15 Jun 2021
17. A Subspace-based Approach for Dimensionality Reduction and Important Variable Selection
    Didi Bo, Hoon Hwangbo, Vinit Sharma, C. Arndt, S. TerMaath
    03 Jun 2021
18. Machine learning for detection of stenoses and aneurysms: application in a physiologically realistic virtual patient database
    G. Jones, Jim Parr, P. Nithiarasu, S. Pant
    28 Feb 2021
19. MDA for random forests: inconsistency, and a practical solution via the Sobol-MDA
    Clément Bénard, Sébastien Da Veiga, Erwan Scornet
    26 Feb 2021
20. Feature Importance Explanations for Temporal Black-Box Models
    Akshay Sood, M. Craven
    23 Feb 2021
21. Provable Boolean Interaction Recovery from Tree Ensemble obtained via Random Forests
    Merle Behr, Yu Wang, Xiao Li, Bin-Xia Yu
    23 Feb 2021
22. Bridging Breiman's Brook: From Algorithmic Modeling to Statistical Learning
    L. Mentch, Giles Hooker
    23 Feb 2021
23. Modeling Household Online Shopping Demand in the U.S.: A Machine Learning Approach and Comparative Investigation between 2009 and 2017
    Limon Barua, Bo Zou, Yan Zhou, Yulin Liu
    11 Jan 2021
24. How Interpretable and Trustworthy are GAMs?
    C. Chang, S. Tan, Benjamin J. Lengerich, Anna Goldenberg, R. Caruana
    11 Jun 2020
25. Nonparametric Feature Impact and Importance
    T. Parr, James D. Wilson, J. Hamrick
    08 Jun 2020
26. From unbiased MDI Feature Importance to Explainable AI for Trees
    Markus Loecher
    26 Mar 2020
27. Unbiased variable importance for random forests
    Markus Loecher
    04 Mar 2020
28. Trees, forests, and impurity-based variable importance
    Erwan Scornet
    13 Jan 2020
29. A Debiased MDI Feature Importance Measure for Random Forests
    Xiao Li, Yu Wang, Sumanta Basu, Karl Kumbier, Bin Yu
    26 Jun 2019
30. Unrestricted Permutation forces Extrapolation: Variable Importance Requires at least One More Model, or There Is No Free Variable Importance
    Giles Hooker, L. Mentch, Siyu Zhou
    01 May 2019
31. Boosting Random Forests to Reduce Bias; One-Step Boosted Forest and its Variance Estimate
    Indrayudh Ghosal, Giles Hooker
    21 Mar 2018