
Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey (arXiv 2006.11371)
16 June 2020
Arun Das, P. Rad
XAI

Papers citing "Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey"

50 / 79 papers shown
Integrating Identity-Based Identification against Adaptive Adversaries in Federated Learning
Jakub Kacper Szelag, Ji-Jian Chin, Lauren Ansell, Sook-Chin Yip
03 Apr 2025
A Unified Framework with Novel Metrics for Evaluating the Effectiveness of XAI Techniques in LLMs
Melkamu Mersha, Mesay Gemeda Yigezu, Hassan Shakil, Ali Al shami, SangHyun Byun, Jugal Kalita
06 Mar 2025
Mapping Trustworthiness in Large Language Models: A Bibliometric Analysis Bridging Theory to Practice
José Antonio Siqueira de Cerqueira, Kai-Kristian Kemell, Muhammad Waseem, Rebekah A. Rousi, Nannan Xi, Juho Hamari
27 Feb 2025
From Abstract to Actionable: Pairwise Shapley Values for Explainable AI
Jiaxin Xu, Hung Chau, Angela Burden
TDI
18 Feb 2025
Coherent Local Explanations for Mathematical Optimization
Daan Otto, Jannis Kurtz, S. Ilker Birbil
07 Feb 2025
Deontic Temporal Logic for Formal Verification of AI Ethics
Priya T.V., Shrisha Rao
10 Jan 2025
Study on the Helpfulness of Explainable Artificial Intelligence
Tobias Labarta, Elizaveta Kulicheva, Ronja Froelian, Christian Geißler, Xenia Melman, Julian von Klitzing
ELM
14 Oct 2024
Explainable Artificial Intelligence: A Survey of Needs, Techniques, Applications, and Future Direction
Melkamu Mersha, Khang Lam, Joseph Wood, Ali AlShami, Jugal Kalita
XAI, AI4TS
30 Aug 2024
Distilling Machine Learning's Added Value: Pareto Fronts in Atmospheric Applications
Tom Beucler, Arthur Grundner, Sara Shamekh, Peter Ukkonen, Matthew Chantry, Ryan Lagerquist
04 Aug 2024
Enabling MCTS Explainability for Sequential Planning Through Computation Tree Logic
Ziyan An, Hendrik Baier, Abhishek Dubey, Ayan Mukhopadhyay, Meiyi Ma
LRM
15 Jul 2024
Implementing local-explainability in Gradient Boosting Trees: Feature Contribution
Ángel Delgado-Panadero, Beatriz Hernández-Lorca, María Teresa García-Ordás, J. Benítez-Andrades
14 Feb 2024
Black-Box Access is Insufficient for Rigorous AI Audits
Stephen Casper, Carson Ezell, Charlotte Siegmann, Noam Kolt, Taylor Lynn Curtis, ..., Michael Gerovitch, David Bau, Max Tegmark, David M. Krueger, Dylan Hadfield-Menell
AAML
25 Jan 2024
On the Relationship Between Interpretability and Explainability in Machine Learning
Benjamin Leblanc, Pascal Germain
FaML
20 Nov 2023
Path To Gain Functional Transparency In Artificial Intelligence With Meaningful Explainability
Md. Tanzib Hosain, Md. Mehedi Hasan Anik, Sadman Rafi, Rana Tabassum, Khaleque Insia, Md. Mehrab Siddiky
13 Oct 2023
IDTraffickers: An Authorship Attribution Dataset to link and connect Potential Human-Trafficking Operations on Text Escort Advertisements
V. Saxena, Benjamin Bashpole, Gijs Van Dijck, Gerasimos Spanakis
09 Oct 2023
Explaining Deep Face Algorithms through Visualization: A Survey
Thrupthi Ann, S. M. I. C. V. Balasubramanian, M. Jawahar
CVBM
26 Sep 2023
BELLA: Black box model Explanations by Local Linear Approximations
N. Radulovic, Albert Bifet, Fabian M. Suchanek
FAtt
18 May 2023
Technical Understanding from IML Hands-on Experience: A Study through a Public Event for Science Museum Visitors
Wataru Kawabe, Yuri Nakao, Akihisa Shitara, Yusuke Sugano
10 May 2023
Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK
L. Nannini, Agathe Balayn, A. Smith
20 Apr 2023
Impact Of Explainable AI On Cognitive Load: Insights From An Empirical Study
L. Herm
18 Apr 2023
A Review on Explainable Artificial Intelligence for Healthcare: Why, How, and When?
M. Rubaiyat Hossain Mondal, Prajoy Podder
10 Apr 2023
Neuro-Symbolic Execution of Generic Source Code
Yaojie Hu, Jin Tian
NAI
23 Mar 2023
Robot Navigation in Risky, Crowded Environments: Understanding Human Preferences
A. Suresh, Angelique Taylor, L. Riek, Sonia Martínez
15 Mar 2023
A System's Approach Taxonomy for User-Centred XAI: A Survey
Ehsan Emamirad, Pouya Ghiasnezhad Omran, A. Haller, S. Gregor
06 Mar 2023
sMRI-PatchNet: A novel explainable patch-based deep learning network for Alzheimer's disease diagnosis and discriminative atrophy localisation with Structural MRI
Xin Zhang, Liangxiu Han, Lianghao Han, Haoming Chen, Darren Dancey, Daoqiang Zhang
MedIm
17 Feb 2023
Interpretability in Activation Space Analysis of Transformers: A Focused Survey
Soniya Vijayakumar
AI4CE
22 Jan 2023
AI Security for Geoscience and Remote Sensing: Challenges and Future Trends
Yonghao Xu, Tao Bai, Weikang Yu, Shizhen Chang, P. M. Atkinson, Pedram Ghamisi
AAML
19 Dec 2022
On the Robustness of Explanations of Deep Neural Network Models: A Survey
Amlan Jyoti, Karthik Balaji Ganesh, Manoj Gayala, Nandita Lakshmi Tunuguntla, Sandesh Kamath, V. Balasubramanian
XAI, FAtt, AAML
09 Nov 2022
Privacy Meets Explainability: A Comprehensive Impact Benchmark
S. Saifullah, Dominique Mercier, Adriano Lucieri, Andreas Dengel, Sheraz Ahmed
08 Nov 2022
Towards Procedural Fairness: Uncovering Biases in How a Toxic Language Classifier Uses Sentiment Information
I. Nejadgholi, Esma Balkir, Kathleen C. Fraser, S. Kiritchenko
19 Oct 2022
On the Explainability of Natural Language Processing Deep Models
Julia El Zini, M. Awad
13 Oct 2022
Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making
Jakob Schoeffer, Maria De-Arteaga, Niklas Kuehl
FaML
23 Sep 2022
Explaining Anomalies using Denoising Autoencoders for Financial Tabular Data
Timur Sattarov, Dayananda Herurkar, Jörn Hees
21 Sep 2022
Explainable AI for clinical and remote health applications: a survey on tabular and time series data
Flavio Di Martino, Franca Delmastro
AI4TS
14 Sep 2022
Visualization Of Class Activation Maps To Explain AI Classification Of Network Packet Captures
Igor Cherepanov, Alex Ulmer, Jonathan Geraldi Joewono, Jörn Kohlhammer
FAtt
05 Sep 2022
An Artificial Intelligence Outlook for Colorectal Cancer Screening
P. Katrakazas, Aristotelis Ballas, M. Anisetti, I. Spais
05 Sep 2022
Causality-Inspired Taxonomy for Explainable Artificial Intelligence
Pedro C. Neto, Tiago B. Gonçalves, João Ribeiro Pinto, W. Silva, Ana F. Sequeira, Arun Ross, Jaime S. Cardoso
XAI
19 Aug 2022
Neural Payoff Machines: Predicting Fair and Stable Payoff Allocations Among Team Members
Daphne Cornelisse, Thomas Rood, Mateusz Malinowski, Yoram Bachrach, Tal Kachman
18 Aug 2022
Object-ABN: Learning to Generate Sharp Attention Maps for Action Recognition
Tomoya Nitta, Tsubasa Hirakawa, H. Fujiyoshi, Toru Tamaki
27 Jul 2022
From Interpretable Filters to Predictions of Convolutional Neural Networks with Explainable Artificial Intelligence
Shagufta Henna, Juan Miguel Lopez Alcaraz
FAtt, XAI
26 Jul 2022
A Survey of Explainable Graph Neural Networks: Taxonomy and Evaluation Metrics
Yiqiao Li, Jianlong Zhou, Sunny Verma, Fang Chen
XAI
26 Jul 2022
Stop ordering machine learning algorithms by their explainability! A user-centered investigation of performance and explainability
L. Herm, Kai Heinrich, Jonas Wanner, Christian Janiesch
20 Jun 2022
Benchmarking Heterogeneous Treatment Effect Models through the Lens of Interpretability
Jonathan Crabbé, Alicia Curth, Ioana Bica, M. Schaar
CML
16 Jun 2022
Challenges in Applying Explainability Methods to Improve the Fairness of NLP Models
Esma Balkir, S. Kiritchenko, I. Nejadgholi, Kathleen C. Fraser
08 Jun 2022
Explainable Artificial Intelligence (XAI) for Internet of Things: A Survey
İbrahim Kök, Feyza Yıldırım Okay, Özgecan Muyanlı, S. Özdemir
XAI
07 Jun 2022
Attribution-based Explanations that Provide Recourse Cannot be Robust
H. Fokkema, R. D. Heide, T. Erven
FAtt
31 May 2022
A Design Space for Explainable Ranking and Ranking Models
I. A. Hazwani, J. Schmid, M. Sachdeva, J. Bernard
XAI
27 May 2022
A Meta-Analysis of the Utility of Explainable Artificial Intelligence in Human-AI Decision-Making
Max Schemmer, Patrick Hemmer, Maximilian Nitsche, Niklas Kühl, Michael Vossing
10 May 2022
Perception Visualization: Seeing Through the Eyes of a DNN
Loris Giulivi, Mark J. Carman, Giacomo Boracchi
21 Apr 2022
Using Decision Tree as Local Interpretable Model in Autoencoder-based LIME
Niloofar Ranjbar, Reza Safabakhsh
FAtt
07 Apr 2022