Quantifying Model Complexity via Functional Decomposition for Better Post-Hoc Interpretability

8 April 2019
Christoph Molnar, Giuseppe Casalicchio, B. Bischl [FAtt]

Papers citing "Quantifying Model Complexity via Functional Decomposition for Better Post-Hoc Interpretability"

18 / 18 papers shown
Multi-Objective Optimization of Performance and Interpretability of Tabular Supervised Machine Learning Models (17 Jul 2023)
Lennart Schneider, B. Bischl, Janek Thomas

Improving Interpretability via Explicit Word Interaction Graph Layer (03 Feb 2023)
Arshdeep Sekhon, Hanjie Chen, A. Shrivastava, Zhe Wang, Yangfeng Ji, Yanjun Qi [AI4CE, MILM]

Mind the Gap: Measuring Generalization Performance Across Multiple Objectives (08 Dec 2022)
Matthias Feurer, Katharina Eggensperger, Eddie Bergman, Florian Pfisterer, B. Bischl, Frank Hutter

Comparing Explanation Methods for Traditional Machine Learning Models Part 1: An Overview of Current Methods and Quantifying Their Disagreement (16 Nov 2022)
Montgomery Flora, Corey K. Potvin, A. McGovern, Shawn Handler [FAtt]

DALE: Differential Accumulated Local Effects for efficient and accurate global explanations (10 Oct 2022)
Vasilis Gkolemis, Theodore Dalamagas, Christos Diou

From plane crashes to algorithmic harm: applicability of safety engineering frameworks for responsible ML (06 Oct 2022)
Shalaleh Rismani, Renee Shelby, A. Smart, Edgar W. Jatho, Joshua A. Kroll, AJung Moon, Negar Rostamzadeh

Explainability in Process Outcome Prediction: Guidelines to Obtain Interpretable and Faithful Models (30 Mar 2022)
Alexander Stevens, Johannes De Smedt [XAI, FaML]

Hyperparameter Optimization: Foundations, Algorithms, Best Practices and Open Challenges (13 Jul 2021)
B. Bischl, Martin Binder, Michel Lang, Tobias Pielok, Jakob Richter, ..., Theresa Ullmann, Marc Becker, A. Boulesteix, Difan Deng, Marius Lindauer

Explainable Artificial Intelligence Approaches: A Survey (23 Jan 2021)
Sheikh Rabiul Islam, W. Eberle, S. Ghafoor, Mohiuddin Ahmed [XAI]

CDT: Cascading Decision Trees for Explainable Reinforcement Learning (15 Nov 2020)
Zihan Ding, Pablo Hernandez-Leal, G. Ding, Changjian Li, Ruitong Huang

Interpretable Machine Learning -- A Brief History, State-of-the-Art and Challenges (19 Oct 2020)
Christoph Molnar, Giuseppe Casalicchio, B. Bischl [AI4TS, AI4CE]

The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies (31 Jul 2020)
A. Markus, J. Kors, P. Rijnbeek

General Pitfalls of Model-Agnostic Interpretation Methods for Machine Learning Models (08 Jul 2020)
Christoph Molnar, Gunnar Konig, J. Herbinger, Timo Freiesleben, Susanne Dandl, Christian A. Scholbeck, Giuseppe Casalicchio, Moritz Grosse-Wentrup, B. Bischl [FAtt, AI4CE]

What Would You Ask the Machine Learning Model? Identification of User Needs for Model Explanations Based on Human-Model Conversations (07 Feb 2020)
Michal Kuzba, P. Biecek [HAI]

Towards Quantification of Explainability in Explainable Artificial Intelligence Methods (22 Nov 2019)
Sheikh Rabiul Islam, W. Eberle, S. Ghafoor [XAI]

Multi-Objective Automatic Machine Learning with AutoxgboostMC (28 Aug 2019)
Florian Pfisterer, Stefan Coors, Janek Thomas, B. Bischl

Proposed Guidelines for the Responsible Use of Explainable Machine Learning (08 Jun 2019)
Patrick Hall, Navdeep Gill, N. Schmidt [SILM, XAI, FaML]

Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees (18 May 2019)
Summer Devlin, Chandan Singh, W. James Murdoch, Bin Yu [FAtt]