Quantifying Model Complexity via Functional Decomposition for Better Post-Hoc Interpretability

8 April 2019
Christoph Molnar, Giuseppe Casalicchio, B. Bischl
FAtt

Papers citing "Quantifying Model Complexity via Functional Decomposition for Better Post-Hoc Interpretability"

29 papers shown
Statistical Multicriteria Benchmarking via the GSD-Front
Christoph Jansen, G. Schollmeyer, Julian Rodemann, Hannah Blocher, Thomas Augustin
06 Jun 2024
Position: A Call to Action for a Human-Centered AutoML Paradigm
Marius Lindauer, Florian Karl, A. Klier, Julia Moosbauer, Alexander Tornede, Andreas Mueller, Frank Hutter, Matthias Feurer, Bernd Bischl
05 Jun 2024
Statistical inference using machine learning and classical techniques based on accumulated local effects (ALE)
Chitu Okoli
15 Oct 2023
Multi-Objective Optimization of Performance and Interpretability of Tabular Supervised Machine Learning Models
Lennart Schneider, B. Bischl, Janek Thomas
17 Jul 2023
Interpreting and generalizing deep learning in physics-based problems with functional linear models
Amirhossein Arzani, Lingxiao Yuan, P. Newell, Bei Wang
AI4CE
10 Jul 2023
SHAP-IQ: Unified Approximation of any-order Shapley Interactions
Fabian Fumagalli, Maximilian Muschalik, Patrick Kolpaczki, Eyke Hüllermeier, Barbara Hammer
02 Mar 2023
Improving Interpretability via Explicit Word Interaction Graph Layer
Arshdeep Sekhon, Hanjie Chen, A. Shrivastava, Zhe Wang, Yangfeng Ji, Yanjun Qi
AI4CE, MILM
03 Feb 2023
Mind the Gap: Measuring Generalization Performance Across Multiple Objectives
Matthias Feurer, Katharina Eggensperger, Eddie Bergman, Florian Pfisterer, B. Bischl, Frank Hutter
08 Dec 2022
Comparing Explanation Methods for Traditional Machine Learning Models Part 2: Quantifying Model Explainability Faithfulness and Improvements with Dimensionality Reduction
Montgomery Flora, Corey K. Potvin, A. McGovern, Shawn Handler
FAtt
18 Nov 2022
Comparing Explanation Methods for Traditional Machine Learning Models Part 1: An Overview of Current Methods and Quantifying Their Disagreement
Montgomery Flora, Corey K. Potvin, A. McGovern, Shawn Handler
FAtt
16 Nov 2022
DALE: Differential Accumulated Local Effects for efficient and accurate global explanations
Vasilis Gkolemis, Theodore Dalamagas, Christos Diou
10 Oct 2022
From plane crashes to algorithmic harm: applicability of safety engineering frameworks for responsible ML
Shalaleh Rismani, Renee Shelby, A. Smart, Edgar W. Jatho, Joshua A. Kroll, AJung Moon, Negar Rostamzadeh
06 Oct 2022
Machine Learning Workflow to Explain Black-box Models for Early Alzheimer's Disease Classification Evaluated for Multiple Datasets
Louise Bloch, Christoph M. Friedrich
12 May 2022
A Collection of Quality Diversity Optimization Problems Derived from Hyperparameter Optimization of Machine Learning Models
Lennart Schneider, Florian Pfisterer, Janek Thomas, B. Bischl
28 Apr 2022
Explainability in Process Outcome Prediction: Guidelines to Obtain Interpretable and Faithful Models
Alexander Stevens, Johannes De Smedt
XAI, FaML
30 Mar 2022
Marginal Effects for Non-Linear Prediction Functions
Christian A. Scholbeck, Giuseppe Casalicchio, Christoph Molnar, Bernd Bischl, C. Heumann
FAtt
21 Jan 2022
Application of Machine Learning Methods in Inferring Surface Water Groundwater Exchanges using High Temporal Resolution Temperature Measurements
Mohammad A. Moghaddam, T. Ferré, Xingyuan Chen, Kewei Chen, M. Ehsani
AI4CE
03 Jan 2022
YAHPO Gym -- An Efficient Multi-Objective Multi-Fidelity Benchmark for Hyperparameter Optimization
Florian Pfisterer, Lennart Schneider, Julia Moosbauer, Martin Binder, B. Bischl
08 Sep 2021
Hyperparameter Optimization: Foundations, Algorithms, Best Practices and Open Challenges
B. Bischl, Martin Binder, Michel Lang, Tobias Pielok, Jakob Richter, ..., Theresa Ullmann, Marc Becker, A. Boulesteix, Difan Deng, Marius Lindauer
13 Jul 2021
Explainable Artificial Intelligence Approaches: A Survey
Sheikh Rabiul Islam, W. Eberle, S. Ghafoor, Mohiuddin Ahmed
XAI
23 Jan 2021
CDT: Cascading Decision Trees for Explainable Reinforcement Learning
Zihan Ding, Pablo Hernandez-Leal, G. Ding, Changjian Li, Ruitong Huang
15 Nov 2020
Interpretable Machine Learning -- A Brief History, State-of-the-Art and Challenges
Christoph Molnar, Giuseppe Casalicchio, B. Bischl
AI4TS, AI4CE
19 Oct 2020
The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies
A. Markus, J. Kors, P. Rijnbeek
31 Jul 2020
General Pitfalls of Model-Agnostic Interpretation Methods for Machine Learning Models
Christoph Molnar, Gunnar Konig, J. Herbinger, Timo Freiesleben, Susanne Dandl, Christian A. Scholbeck, Giuseppe Casalicchio, Moritz Grosse-Wentrup, B. Bischl
FAtt, AI4CE
08 Jul 2020
What Would You Ask the Machine Learning Model? Identification of User Needs for Model Explanations Based on Human-Model Conversations
Michal Kuzba, P. Biecek
HAI
07 Feb 2020
Towards Quantification of Explainability in Explainable Artificial Intelligence Methods
Sheikh Rabiul Islam, W. Eberle, S. Ghafoor
XAI
22 Nov 2019
Multi-Objective Automatic Machine Learning with AutoxgboostMC
Florian Pfisterer, Stefan Coors, Janek Thomas, B. Bischl
28 Aug 2019
Proposed Guidelines for the Responsible Use of Explainable Machine Learning
Patrick Hall, Navdeep Gill, N. Schmidt
SILM, XAI, FaML
08 Jun 2019
Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees
Summer Devlin, Chandan Singh, W. James Murdoch, Bin Yu
FAtt
18 May 2019