
Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead

26 November 2018
Cynthia Rudin
ELM, FaML

Papers citing "Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead"

Showing 50 of 55 citing papers:
Efficient and Interpretable Neural Networks Using Complex Lehmer Transform
M. Ataei, Xiaogang Wang
28 Jan 2025

On Behalf of the Stakeholders: Trends in NLP Model Interpretability in the Era of LLMs
Nitay Calderon, Roi Reichart
27 Jul 2024

Black-Box Access is Insufficient for Rigorous AI Audits
Stephen Casper, Carson Ezell, Charlotte Siegmann, Noam Kolt, Taylor Lynn Curtis, ..., Michael Gerovitch, David Bau, Max Tegmark, David M. Krueger, Dylan Hadfield-Menell
AAML
25 Jan 2024

Faster Peace via Inclusivity: An Efficient Paradigm to Understand Populations in Conflict Zones
Jordan Bilich, Michael Varga, Daanish Masood, Andrew Konya
01 Nov 2023

A Survey on Explainability of Graph Neural Networks
Jaykumar Kakkad, Jaspal Jannu, Kartik Sharma, Charu C. Aggarwal, Sourav Medya
02 Jun 2023

BELLA: Black box model Explanations by Local Linear Approximations
N. Radulovic, Albert Bifet, Fabian M. Suchanek
FAtt
18 May 2023

Less is More: The Influence of Pruning on the Explainability of CNNs
David Weber, F. Merkle, Pascal Schöttle, Stephan Schlögl, Martin Nocker
FAtt
17 Feb 2023

CI-GNN: A Granger Causality-Inspired Graph Neural Network for Interpretable Brain Network-Based Psychiatric Diagnosis
Kaizhong Zheng, Shujian Yu, Badong Chen
CML
04 Jan 2023

Automated Learning of Interpretable Models with Quantified Uncertainty
Geoffrey F. Bomarito, Patrick E. Leser, N. Strauss, K. Garbrecht, J. D. Hochhalter
12 Apr 2022

Interpretation of Black Box NLP Models: A Survey
Shivani Choudhary, N. Chatterjee, S. K. Saha
FAtt
31 Mar 2022

Diagnosing AI Explanation Methods with Folk Concepts of Behavior
Alon Jacovi, Jasmijn Bastings, Sebastian Gehrmann, Yoav Goldberg, Katja Filippova
27 Jan 2022

ProtGNN: Towards Self-Explaining Graph Neural Networks
Zaixin Zhang, Qi Liu, Hao Wang, Chengqiang Lu, Chee-Kong Lee
02 Dec 2021

Interpreting and improving deep-learning models with reality checks
Chandan Singh, Wooseok Ha, Bin Yu
FAtt
16 Aug 2021

Attention, please! A survey of Neural Attention Models in Deep Learning
Alana de Santana Correia, Esther Luna Colombini
HAI
31 Mar 2021

Local Interpretations for Explainable Natural Language Processing: A Survey
Siwen Luo, Hamish Ivison, S. Han, Josiah Poon
MILM
20 Mar 2021

Towards an AI assistant for power grid operators
Antoine Marot, Alexandre Rozier, Matthieu Dussartre, Laure Crochepierre, Benjamin Donnot
AI4CE
03 Dec 2020

A Survey on the Explainability of Supervised Machine Learning
Nadia Burkart, Marco F. Huber
FaML, XAI
16 Nov 2020

Enforcing Interpretability and its Statistical Impacts: Trade-offs between Accuracy and Interpretability
Gintare Karolina Dziugaite, Shai Ben-David, Daniel M. Roy
FaML
26 Oct 2020

Generating End-to-End Adversarial Examples for Malware Classifiers Using Explainability
Ishai Rosenberg, Shai Meir, J. Berrebi, I. Gordon, Guillaume Sicard, Eli David
AAML, SILM
28 Sep 2020

Logic Programming and Machine Ethics
Abeer Dyoub, Stefania Costantini, F. Lisi
22 Sep 2020

Region Comparison Network for Interpretable Few-shot Image Classification
Z. Xue, Lixin Duan, Wen Li, Lin Chen, Jiebo Luo
08 Sep 2020

Conceptual Metaphors Impact Perceptions of Human-AI Collaboration
Pranav Khadpe, Ranjay Krishna, Fei-Fei Li, Jeffrey T. Hancock, Michael S. Bernstein
05 Aug 2020

Explaining Deep Neural Networks using Unsupervised Clustering
Yu-Han Liu, Sercan O. Arik
SSL, AI4CE
15 Jul 2020

Improving Workflow Integration with xPath: Design and Evaluation of a Human-AI Diagnosis System in Pathology
H. Gu, Yuan Liang, Yifan Xu, Christopher Kazu Williams, S. Magaki, ..., Wenzhong Yan, X. R. Zhang, Yang Li, Mohammad Haeri, Xiang 'Anthony' Chen
23 Jun 2020

Towards Faithfully Interpretable NLP Systems: How should we define and evaluate faithfulness?
Alon Jacovi, Yoav Goldberg
XAI
07 Apr 2020

Causality-based Explanation of Classification Outcomes
Leopoldo Bertossi, Jordan Li, Maximilian Schleich, Dan Suciu, Zografoula Vagena
XAI, CML, FAtt
15 Mar 2020

A general framework for scientifically inspired explanations in AI
David Tuckey, A. Russo, Krysia Broda
02 Mar 2020

Learning Global Transparent Models Consistent with Local Contrastive Explanations
Tejaswini Pedapati, Avinash Balakrishnan, Karthikeyan Shanmugam, Amit Dhurandhar
FAtt
19 Feb 2020

Algorithmic Recourse: from Counterfactual Explanations to Interventions
Amir-Hossein Karimi, Bernhard Schölkopf, Isabel Valera
CML
14 Feb 2020

Exploring Benefits of Transfer Learning in Neural Machine Translation
Tom Kocmi
06 Jan 2020

Dirichlet uncertainty wrappers for actionable algorithm accuracy accountability and auditability
José Mena, O. Pujol, Jordi Vitrià
29 Dec 2019

AutoAIViz: Opening the Blackbox of Automated Artificial Intelligence with Conditional Parallel Coordinates
D. Weidele, Justin D. Weisz, Eno Oduor, Michael J. Muller, Josh Andres, Alexander G. Gray, Dakuo Wang
13 Dec 2019

Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
XAI
22 Oct 2019

FACE: Feasible and Actionable Counterfactual Explanations
Rafael Poyiadzi, Kacper Sokol, Raúl Santos-Rodríguez, T. D. Bie, Peter A. Flach
20 Sep 2019

Towards a Rigorous Evaluation of XAI Methods on Time Series
U. Schlegel, Hiba Arnout, Mennatallah El-Assady, Daniela Oelke, Daniel A. Keim
XAI, AI4TS
16 Sep 2019

Learning Fair Rule Lists
Ulrich Aïvodji, Julien Ferry, Sébastien Gambs, Marie-José Huguet, Mohamed Siala
FaML
09 Sep 2019

Visualizing Image Content to Explain Novel Image Discovery
Jake H. Lee, K. Wagstaff
14 Aug 2019

Attention is not not Explanation
Sarah Wiegreffe, Yuval Pinter
XAI, AAML, FAtt
13 Aug 2019

Interpretable and Steerable Sequence Learning via Prototypes
Yao Ming, Panpan Xu, Huamin Qu, Liu Ren
AI4TS
23 Jul 2019

The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations
Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, X. Renard, Marcin Detyniecki
22 Jul 2019

Forecasting remaining useful life: Interpretable deep learning approach via variational Bayesian inferences
Mathias Kraus, Stefan Feuerriegel
11 Jul 2019

Global and Local Interpretability for Cardiac MRI Classification
J. Clough, Ilkay Oksuz, Esther Puyol-Antón, B. Ruijsink, A. King, Julia A. Schnabel
14 Jun 2019

Understanding artificial intelligence ethics and safety
David Leslie
FaML, AI4TS
11 Jun 2019

Issues with post-hoc counterfactual explanations: a discussion
Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki
CML
11 Jun 2019

Proposed Guidelines for the Responsible Use of Explainable Machine Learning
Patrick Hall, Navdeep Gill, N. Schmidt
SILM, XAI, FaML
08 Jun 2019

Model-Agnostic Counterfactual Explanations for Consequential Decisions
Amir-Hossein Karimi, Gilles Barthe, Borja Balle, Isabel Valera
27 May 2019

Hybrid Predictive Model: When an Interpretable Model Collaborates with a Black-box Model
Tong Wang, Qihang Lin
10 May 2019

The Scientific Method in the Science of Machine Learning
Jessica Zosa Forde, Michela Paganini
24 Apr 2019

Explaining Deep Classification of Time-Series Data with Learned Prototypes
Alan H. Gee, Diego Garcia-Olano, Joydeep Ghosh, D. Paydarfar
AI4TS
18 Apr 2019

"Why did you do that?": Explaining black box models with Inductive
  Synthesis
"Why did you do that?": Explaining black box models with Inductive Synthesis
Görkem Paçaci
David Johnson
S. McKeever
A. Hamfelt
35
6
0
17 Apr 2019