ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

arXiv 1806.00069

Explaining Explanations: An Overview of Interpretability of Machine Learning

31 May 2018
Leilani H. Gilpin
David Bau
Ben Z. Yuan
Ayesha Bajwa
Michael A. Specter
Lalana Kagal
    XAI

Papers citing "Explaining Explanations: An Overview of Interpretability of Machine Learning"

50 / 168 papers shown
Integrating Earth Observation Data into Causal Inference: Challenges and Opportunities
Connor Jerzak
Fredrik D. Johansson
Adel Daoud
CML
33
11
0
30 Jan 2023
Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations
Valerie Chen
Q. V. Liao
Jennifer Wortman Vaughan
Gagan Bansal
36
103
0
18 Jan 2023
On the explainability of quantum neural networks based on variational quantum circuits
Ammar Daskin
MLT
FAtt
23
2
0
12 Jan 2023
Mapping Knowledge Representations to Concepts: A Review and New Perspectives
Lars Holmberg
P. Davidsson
Per Linde
26
1
0
31 Dec 2022
The State of the Art in Enhancing Trust in Machine Learning Models with the Use of Visualizations
Angelos Chatzimparmpas
R. Martins
I. Jusufi
K. Kucher
Fabrice Rossi
A. Kerren
FAtt
24
160
0
22 Dec 2022
Bort: Towards Explainable Neural Networks with Bounded Orthogonal Constraint
Borui Zhang
Wenzhao Zheng
Jie Zhou
Jiwen Lu
AAML
23
7
0
18 Dec 2022
Manifestations of Xenophobia in AI Systems
Nenad Tomašev
J. L. Maynard
Iason Gabriel
24
9
0
15 Dec 2022
Interpretable ML for Imbalanced Data
Damien Dablain
C. Bellinger
Bartosz Krawczyk
D. Aha
Nitesh V. Chawla
22
1
0
15 Dec 2022
The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies
Alexandre Blanco-Gonzalez
Alfonso Cabezon
Alejandro Seco-Gonzalez
Daniel Conde-Torres
Paula Antelo-Riveiro
Ángel Piñeiro
R. García‐Fandiño
14
254
0
08 Dec 2022
A Modality-level Explainable Framework for Misinformation Checking in Social Networks
Vítor Lourenço
A. Paes
22
3
0
08 Dec 2022
Towards Explainability in Modular Autonomous Vehicle Software
Hongrui Zheng
Zirui Zang
Shuo Yang
Rahul Mangharam
25
0
0
01 Dec 2022
Explainable Artificial Intelligence: Precepts, Methods, and Opportunities for Research in Construction
Peter E. D. Love
Weili Fang
J. Matthews
Stuart Porter
Hanbin Luo
L. Ding
XAI
29
7
0
12 Nov 2022
REVEL Framework to measure Local Linear Explanations for black-box models: Deep Learning Image Classification case of study
Iván Sevillano-García
Julián Luengo-Martín
Francisco Herrera
XAI
FAtt
19
7
0
11 Nov 2022
A $k$-additive Choquet integral-based approach to approximate the SHAP values for local interpretability in machine learning
G. D. Pelegrina
L. Duarte
M. Grabisch
FAtt
TDI
33
27
0
03 Nov 2022
Machine Learning in Transaction Monitoring: The Prospect of xAI
Julie Gerlings
Ioanna D. Constantiou
17
2
0
14 Oct 2022
On the Explainability of Natural Language Processing Deep Models
Julia El Zini
M. Awad
25
82
0
13 Oct 2022
Quantitative Metrics for Evaluating Explanations of Video DeepFake Detectors
Federico Baldassarre
Quentin Debard
Gonzalo Fiz Pontiveros
Tri Kurniawan Wijaya
36
4
0
07 Oct 2022
Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making
Jakob Schoeffer
Maria De-Arteaga
Niklas Kuehl
FaML
36
45
0
23 Sep 2022
Quantile-constrained Wasserstein projections for robust interpretability of numerical and machine learning models
Marouane Il Idrissi
Nicolas Bousquet
Fabrice Gamboa
Bertrand Iooss
Jean-Michel Loubes
29
2
0
23 Sep 2022
"Mama Always Had a Way of Explaining Things So I Could Understand": A Dialogue Corpus for Learning to Construct Explanations
Henning Wachsmuth
Milad Alshomary
24
12
0
06 Sep 2022
Making the black-box brighter: interpreting machine learning algorithm for forecasting drilling accidents
E. Gurina
Nikita Klyuchnikov
Ksenia Antipova
D. Koroteev
FAtt
25
8
0
06 Sep 2022
Interpretable Fake News Detection with Topic and Deep Variational Models
Marjan Hosseini
Alireza Javadian Sabet
Suining He
Derek Aguiar
19
20
0
04 Sep 2022
Causality-Inspired Taxonomy for Explainable Artificial Intelligence
Pedro C. Neto
Tiago B. Gonçalves
João Ribeiro Pinto
W. Silva
Ana F. Sequeira
Arun Ross
Jaime S. Cardoso
XAI
26
12
0
19 Aug 2022
An Empirical Comparison of Explainable Artificial Intelligence Methods for Clinical Data: A Case Study on Traumatic Brain Injury
Amin Nayebi
Sindhu Tipirneni
Brandon Foreman
Chandan K. Reddy
V. Subbian
24
3
0
13 Aug 2022
Learning an Interpretable Model for Driver Behavior Prediction with Inductive Biases
Salar Arbabi
D. Tavernini
Saber Fallah
Richard Bowden
30
7
0
31 Jul 2022
Mediators: Conversational Agents Explaining NLP Model Behavior
Nils Feldhus
A. Ravichandran
Sebastian Möller
27
16
0
13 Jun 2022
Explainable Artificial Intelligence (XAI) for Internet of Things: A Survey
İbrahim Kök
Feyza Yıldırım Okay
Özgecan Muyanlı
S. Özdemir
XAI
12
51
0
07 Jun 2022
What You See is What You Classify: Black Box Attributions
Steven Stalder
Nathanael Perraudin
R. Achanta
F. Pérez-Cruz
Michele Volpi
FAtt
24
9
0
23 May 2022
How Platform-User Power Relations Shape Algorithmic Accountability: A Case Study of Instant Loan Platforms and Financially Stressed Users in India
Divya Ramesh
Vaishnav Kameswaran
Ding-wen Wang
Nithya Sambasivan
19
35
0
11 May 2022
Human-AI Collaboration via Conditional Delegation: A Case Study of Content Moderation
Vivian Lai
Samuel Carton
Rajat Bhatnagar
Vera Liao
Yunfeng Zhang
Chenhao Tan
18
129
0
25 Apr 2022
Do Users Benefit From Interpretable Vision? A User Study, Baseline, And Dataset
Leon Sixt
M. Schuessler
Oana-Iuliana Popescu
Philipp Weiß
Tim Landgraf
FAtt
24
14
0
25 Apr 2022
CLEVR-X: A Visual Reasoning Dataset for Natural Language Explanations
Leonard Salewski
A. Sophia Koepke
Hendrik P. A. Lensch
Zeynep Akata
LRM
NAI
25
20
0
05 Apr 2022
ConceptExplainer: Interactive Explanation for Deep Neural Networks from a Concept Perspective
Jinbin Huang
Aditi Mishra
Bum Chul Kwon
Chris Bryan
FAtt
HAI
34
31
0
04 Apr 2022
Concept Embedding Analysis: A Review
Gesina Schwalbe
19
28
0
25 Mar 2022
Improving Health Mentioning Classification of Tweets using Contrastive Adversarial Training
Pervaiz Iqbal Khan
Shoaib Ahmed Siddiqui
Imran Razzak
Andreas Dengel
Sheraz Ahmed
13
3
0
03 Mar 2022
NeuroView-RNN: It's About Time
C. Barberan
Sina Alemohammad
Naiming Liu
Randall Balestriero
Richard G. Baraniuk
AI4TS
HAI
33
2
0
23 Feb 2022
Listen to Interpret: Post-hoc Interpretability for Audio Networks with NMF
Jayneel Parekh
Sanjeel Parekh
Pavlo Mozharovskyi
Florence d'Alché-Buc
G. Richard
11
22
0
23 Feb 2022
The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective
Satyapriya Krishna
Tessa Han
Alex Gu
Steven Wu
S. Jabbari
Himabindu Lakkaraju
172
185
0
03 Feb 2022
Visualizing Automatic Speech Recognition -- Means for a Better Understanding?
Karla Markert
Romain Parracone
Mykhailo Kulakov
Philip Sperl
Ching-yu Kao
Konstantin Böttinger
11
8
0
01 Feb 2022
Black-box Error Diagnosis in Deep Neural Networks for Computer Vision: a Survey of Tools
Piero Fraternali
Federico Milani
Rocio Nahime Torres
Niccolò Zangrando
AAML
25
9
0
17 Jan 2022
Explainability Is in the Mind of the Beholder: Establishing the Foundations of Explainable Artificial Intelligence
Kacper Sokol
Peter A. Flach
31
20
0
29 Dec 2021
AcME -- Accelerated Model-agnostic Explanations: Fast Whitening of the Machine-Learning Black Box
David Dandolo
Chiara Masiero
Mattia Carletti
Davide Dalle Pezze
Gian Antonio Susto
FAtt
LRM
22
22
0
23 Dec 2021
Global explainability in aligned image modalities
Justin Engelmann
Amos Storkey
Miguel O. Bernabeu
FAtt
14
4
0
17 Dec 2021
HIVE: Evaluating the Human Interpretability of Visual Explanations
Sunnie S. Y. Kim
Nicole Meister
V. V. Ramaswamy
Ruth C. Fong
Olga Russakovsky
58
114
0
06 Dec 2021
A Survey on the Robustness of Feature Importance and Counterfactual Explanations
Saumitra Mishra
Sanghamitra Dutta
Jason Long
Daniele Magazzeni
AAML
9
58
0
30 Oct 2021
Learning Rich Nearest Neighbor Representations from Self-supervised Ensembles
Bram Wallace
Devansh Arpit
Huan Wang
Caiming Xiong
SSL
OOD
22
0
0
19 Oct 2021
Interpretable Neural Networks with Frank-Wolfe: Sparse Relevance Maps and Relevance Orderings
Jan Macdonald
Mathieu Besançon
S. Pokutta
27
11
0
15 Oct 2021
Foundations of Symbolic Languages for Model Interpretability
Marcelo Arenas
Daniel Baez
Pablo Barceló
Jorge A. Pérez
Bernardo Subercaseaux
ReLM
LRM
14
24
0
05 Oct 2021
Generating User-Centred Explanations via Illocutionary Question Answering: From Philosophy to Interfaces
Francesco Sovrano
F. Vitali
32
14
0
02 Oct 2021
LEMON: Explainable Entity Matching
Nils Barlaug
FAtt
AAML
12
9
0
01 Oct 2021
Page 1 of 4