ResearchTrend.AI

arXiv:2007.04131
General Pitfalls of Model-Agnostic Interpretation Methods for Machine Learning Models

8 July 2020
Christoph Molnar
Gunnar König
J. Herbinger
Timo Freiesleben
Susanne Dandl
Christian A. Scholbeck
Giuseppe Casalicchio
Moritz Grosse-Wentrup
B. Bischl
Tags: FAtt, AI4CE

Papers citing "General Pitfalls of Model-Agnostic Interpretation Methods for Machine Learning Models"

26 / 26 papers shown
In defence of post-hoc explanations in medical AI
Joshua Hatherley
Lauritz Munch
Jens Christian Bjerring
32
0
0
29 Apr 2025
What's Wrong with Your Synthetic Tabular Data? Using Explainable AI to Evaluate Generative Models
Jan Kapar
Niklas Koenen
Martin Jullum
64
0
0
29 Apr 2025
ODExAI: A Comprehensive Object Detection Explainable AI Evaluation
Loc Phuc Truong Nguyen
Hung Truong Thanh Nguyen
Hung Cao
68
0
0
27 Apr 2025
The Curious Case of Arbitrariness in Machine Learning
Prakhar Ganesh
Afaf Taik
G. Farnadi
59
2
0
28 Jan 2025
Navigating the Maze of Explainable AI: A Systematic Approach to Evaluating Methods and Metrics
Lukas Klein
Carsten T. Lüth
U. Schlegel
Till J. Bungert
Mennatallah El-Assady
Paul F. Jäger
XAI
ELM
42
2
0
03 Jan 2025
User-centric evaluation of explainability of AI with and for humans: a comprehensive empirical study
Szymon Bobek
Paloma Korycińska
Monika Krakowska
Maciej Mozolewski
Dorota Rak
Magdalena Zych
Magdalena Wójcik
Grzegorz J. Nalepa
ELM
32
1
0
21 Oct 2024
CHILLI: A data context-aware perturbation method for XAI
Saif Anwar
Nathan Griffiths
A. Bhalerao
T. Popham
35
0
0
10 Jul 2024
Why You Should Not Trust Interpretations in Machine Learning: Adversarial Attacks on Partial Dependence Plots
Xi Xin
Giles Hooker
Fei Huang
AAML
38
6
0
29 Apr 2024
Variable Importance in High-Dimensional Settings Requires Grouping
Ahmad Chamma
Bertrand Thirion
D. Engemann
43
4
0
18 Dec 2023
Machine Learning For An Explainable Cost Prediction of Medical Insurance
U. Orji
Elochukwu A. Ukwandu
26
31
0
23 Nov 2023
On the Relationship Between Interpretability and Explainability in Machine Learning
Benjamin Leblanc
Pascal Germain
FaML
26
0
0
20 Nov 2023
Statistically Valid Variable Importance Assessment through Conditional Permutations
Ahmad Chamma
D. Engemann
Bertrand Thirion
20
11
0
14 Sep 2023
Confident Feature Ranking
Bitya Neuhof
Y. Benjamini
FAtt
26
3
0
28 Jul 2023
Don't Lie to Me: Avoiding Malicious Explanations with STEALTH
Lauren Alvarez
Tim Menzies
26
2
0
25 Jan 2023
A Time Series Approach to Explainability for Neural Nets with Applications to Risk-Management and Fraud Detection
M. Wildi
Branka Hadji Misheva
AI4TS
17
1
0
06 Dec 2022
Comparing Explanation Methods for Traditional Machine Learning Models Part 2: Quantifying Model Explainability Faithfulness and Improvements with Dimensionality Reduction
Montgomery Flora
Corey K. Potvin
A. McGovern
Shawn Handler
FAtt
26
4
0
18 Nov 2022
Algorithm-Agnostic Interpretations for Clustering
Christian A. Scholbeck
Henri Funk
Giuseppe Casalicchio
23
0
0
21 Sep 2022
REPID: Regional Effect Plots with implicit Interaction Detection
J. Herbinger
Bernd Bischl
Giuseppe Casalicchio
FAtt
18
15
0
15 Feb 2022
Causal Explanations and XAI
Sander Beckers
CML
XAI
26
34
0
31 Jan 2022
GAM Changer: Editing Generalized Additive Models with Interactive Visualization
Zijie J. Wang
Alex Kale
Harsha Nori
P. Stella
M. Nunnally
Duen Horng Chau
Mihaela Vorvoreanu
Jennifer Wortman Vaughan
R. Caruana
KELM
19
24
0
06 Dec 2021
Counterfactual Explanations for Models of Code
Jürgen Cito
Işıl Dillig
V. Murali
S. Chandra
AAML
LRM
29
47
0
10 Nov 2021
Machine learning methods for prediction of cancer driver genes: a survey paper
R. Andrades
M. R. Mendoza
23
26
0
28 Sep 2021
How to avoid machine learning pitfalls: a guide for academic researchers
M. Lones
VLM
FaML
OnRL
62
77
0
05 Aug 2021
Grouped Feature Importance and Combined Features Effect Plot
Quay Au
J. Herbinger
Clemens Stachl
B. Bischl
Giuseppe Casalicchio
FAtt
45
44
0
23 Apr 2021
Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez
Been Kim
XAI
FaML
251
3,683
0
28 Feb 2017
Measuring and testing dependence by correlation of distances
G. Székely
Maria L. Rizzo
N. K. Bakirov
177
2,577
0
28 Mar 2008