Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey
Arun Das, P. Rad · 16 June 2020 · arXiv:2006.11371 · XAI

Papers citing "Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey" (37 of 87 shown)
A Meta-Analysis of the Utility of Explainable Artificial Intelligence in Human-AI Decision-Making (10 May 2022)
Max Schemmer, Patrick Hemmer, Maximilian Nitsche, Niklas Kühl, Michael Vössing

Perception Visualization: Seeing Through the Eyes of a DNN (21 Apr 2022)
Loris Giulivi, Mark J. Carman, Giacomo Boracchi
Using Decision Tree as Local Interpretable Model in Autoencoder-based LIME (07 Apr 2022)
Niloofar Ranjbar, Reza Safabakhsh · FAtt

Explainability in reinforcement learning: perspective and position (22 Mar 2022)
Agneza Krajna, Mario Brčič, T. Lipić, Juraj Dončević
Human-Centric Artificial Intelligence Architecture for Industry 5.0 Applications (21 Mar 2022)
Jože M. Rožanec, I. Novalija, Patrik Zajec, K. Kenda, Hooman Tavakoli, ..., G. Sofianidis, Spyros Theodoropoulos, Blaž Fortuna, Dunja Mladenić, John Soldatos · 3DV, AI4CE

A Survey on Privacy for B5G/6G: New Privacy Challenges, and Research Directions (08 Mar 2022)
Chamara Sandeepa, Bartlomiej Siniarski, N. Kourtellis, Shen Wang, Madhusanka Liyanage
Label-Free Explainability for Unsupervised Models (03 Mar 2022)
Jonathan Crabbé, M. Schaar · FAtt, MILM

Time to Focus: A Comprehensive Benchmark Using Time Series Attribution Methods (08 Feb 2022)
Dominique Mercier, Jwalin Bhatt, Andreas Dengel, Sheraz Ahmed · AI4TS

Investigating the fidelity of explainable artificial intelligence methods for applications of convolutional neural networks in geoscience (07 Feb 2022)
Antonios Mamalakis, E. Barnes, I. Ebert‐Uphoff

SyntEO: Synthetic Data Set Generation for Earth Observation and Deep Learning -- Demonstrated for Offshore Wind Farm Detection (06 Dec 2021)
Thorsten Hoeser, C. Kuenzer
STEEX: Steering Counterfactual Explanations with Semantics (17 Nov 2021)
P. Jacob, Éloi Zablocki, H. Ben-younes, Mickaël Chen, P. Pérez, Matthieu Cord

A Survey on AI Assurance (15 Nov 2021)
Feras A. Batarseh, Laura J. Freeman

Revisiting Methods for Finding Influential Examples (08 Nov 2021)
Karthikeyan K, Anders Søgaard · TDI

Explaining Latent Representations with a Corpus of Examples (28 Oct 2021)
Jonathan Crabbé, Zhaozhi Qian, F. Imrie, M. Schaar · FAtt

A Framework for Learning to Request Rich and Contextually Useful Information from Humans (14 Oct 2021)
Khanh Nguyen, Yonatan Bisk, Hal Daumé
Explaining Bayesian Neural Networks (23 Aug 2021)
Kirill Bykov, Marina M.-C. Höhne, Adelaida Creosteanu, Klaus-Robert Müller, Frederick Klauschen, Shinichi Nakajima, Marius Kloft · BDL, AAML

A Framework and Benchmarking Study for Counterfactual Generating Methods on Tabular Data (09 Jul 2021)
Raphael Mazzine, David Martens

General Board Game Concepts (02 Jul 2021)
Éric Piette, Matthew Stephenson, Dennis J. N. J. Soemers, C. Browne
Synthetic Benchmarks for Scientific Research in Explainable Machine Learning (23 Jun 2021)
Yang Liu, Sujay Khandagale, Colin White, W. Neiswanger

Entropy-based Logic Explanations of Neural Networks (12 Jun 2021)
Pietro Barbiero, Gabriele Ciravegna, Francesco Giannini, Pietro Lió, Marco Gori, S. Melacci · FAtt, XAI

Evaluating the Correctness of Explainable AI Algorithms for Classification (20 May 2021)
Orcun Yalcin, Xiuyi Fan, Siyuan Liu · XAI, FAtt
A Comprehensive Taxonomy for Explainable Artificial Intelligence: A Systematic Survey of Surveys on Methods and Concepts (15 May 2021)
Gesina Schwalbe, Bettina Finzel · XAI

Neural Network Attribution Methods for Problems in Geoscience: A Novel Synthetic Benchmark Dataset (18 Mar 2021)
Antonios Mamalakis, I. Ebert‐Uphoff, E. Barnes · OOD

Explanations in Autonomous Driving: A Survey (09 Mar 2021)
Daniel Omeiza, Helena Webb, Marina Jirotka, Lars Kunze
Counterfactuals and Causability in Explainable Artificial Intelligence: Theory, Algorithms, and Applications (07 Mar 2021)
Yu-Liang Chou, Catarina Moreira, P. Bruza, Chun Ouyang, Joaquim A. Jorge · CML

Ensembles of Random SHAPs (04 Mar 2021)
Lev V. Utkin, A. Konstantinov · FAtt

Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and their Needs (24 Jan 2021)
Harini Suresh, Steven R. Gomez, K. Nam, Arvind Satyanarayan
Explainability of deep vision-based autonomous driving systems: Review and challenges (13 Jan 2021)
Éloi Zablocki, H. Ben-younes, P. Pérez, Matthieu Cord · XAI

Probing Model Signal-Awareness via Prediction-Preserving Input Minimization (25 Nov 2020)
Sahil Suneja, Yunhui Zheng, Yufan Zhuang, Jim Laredo, Alessandro Morari · AAML

The Intriguing Relation Between Counterfactual Explanations and Adversarial Examples (11 Sep 2020)
Timo Freiesleben · GAN
Learning from Few Samples: A Survey (30 Jul 2020)
Nihar Bendre, Hugo Terashima-Marín, Peyman Najafirad · VLM, BDL

General Pitfalls of Model-Agnostic Interpretation Methods for Machine Learning Models (08 Jul 2020)
Christoph Molnar, Gunnar König, J. Herbinger, Timo Freiesleben, Susanne Dandl, Christian A. Scholbeck, Giuseppe Casalicchio, Moritz Grosse-Wentrup, B. Bischl · FAtt, AI4CE

Counterfactual explanation of machine learning survival models (26 Jun 2020)
M. Kovalev, Lev V. Utkin · CML, OffRL
On Completeness-aware Concept-Based Explanations in Deep Neural Networks (17 Oct 2019)
Chih-Kuan Yeh, Been Kim, Sercan Ö. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar · FAtt

Towards A Rigorous Science of Interpretable Machine Learning (28 Feb 2017)
Finale Doshi-Velez, Been Kim · XAI, FaML

Adversarial Machine Learning at Scale (04 Nov 2016)
Alexey Kurakin, Ian Goodfellow, Samy Bengio · AAML

Adversarial examples in the physical world (08 Jul 2016)
Alexey Kurakin, Ian Goodfellow, Samy Bengio · SILM, AAML