ResearchTrend.AI

MAGIX: Model Agnostic Globally Interpretable Explanations
22 June 2017
Nikaash Puri, Piyush B. Gupta, Pratiksha Agarwal, Sukriti Verma, Balaji Krishnamurthy
Topics: FAtt

Papers citing "MAGIX: Model Agnostic Globally Interpretable Explanations" (9 papers)
The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations
Aparna Balagopalan, Haoran Zhang, Kimia Hamidieh, Thomas Hartvigsen, Frank Rudzicz, Marzyeh Ghassemi
06 May 2022
Why model why? Assessing the strengths and limitations of LIME
Jurgen Dieber, S. Kirrane
Topics: FAtt
30 Nov 2020
Interpretable Machine Learning -- A Brief History, State-of-the-Art and Challenges
Christoph Molnar, Giuseppe Casalicchio, B. Bischl
Topics: AI4TS, AI4CE
19 Oct 2020
Explaining Explanations: Axiomatic Feature Interactions for Deep Networks
Joseph D. Janizek, Pascal Sturmfels, Su-In Lee
Topics: FAtt
10 Feb 2020
Interpretability of Blackbox Machine Learning Models through Dataview Extraction and Shadow Model creation
Rupam Patir, Shubham Singhal, C. Anantaram, Vikram Goyal
02 Feb 2020
LoRMIkA: Local rule-based model interpretability with k-optimal associations
Dilini Sewwandi Rajapaksha, Christoph Bergmeir, Wray L. Buntine
11 Aug 2019
Enhancing Decision Tree based Interpretation of Deep Neural Networks through L1-Orthogonal Regularization
Nina Schaaf, Marco F. Huber, Johannes Maucher
10 Apr 2019
SAFE ML: Surrogate Assisted Feature Extraction for Model Learning
Alicja Gosiewska, A. Gacek, Piotr Lubon, P. Biecek
28 Feb 2019
Contrastive Explanations with Local Foil Trees
J. V. D. Waa, M. Robeer, J. Diggelen, Matthieu J. S. Brinkhuis, Mark Antonius Neerincx
Topics: FAtt
19 Jun 2018