Comparing Explanation Methods for Traditional Machine Learning Models Part 1: An Overview of Current Methods and Quantifying Their Disagreement (arXiv:2211.08943)

16 November 2022
Montgomery Flora, Corey K. Potvin, A. McGovern, Shawn Handler
Topics: FAtt

Papers citing "Comparing Explanation Methods for Traditional Machine Learning Models Part 1: An Overview of Current Methods and Quantifying Their Disagreement"

4 / 4 papers shown

Feature Importance Depends on Properties of the Data: Towards Choosing the Correct Explanations for Your Data and Decision Trees based Models
Célia Wafa Ayad, Thomas Bonnier, Benjamin Bosch, Sonali Parbhoo, Jesse Read
Topics: FAtt, XAI
11 Feb 2025

Grouped Feature Importance and Combined Features Effect Plot
Quay Au, J. Herbinger, Clemens Stachl, B. Bischl, Giuseppe Casalicchio
Topics: FAtt
23 Apr 2021

Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI
Alon Jacovi, Ana Marasović, Tim Miller, Yoav Goldberg
15 Oct 2020

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
Topics: XAI, FaML
28 Feb 2017