A Survey on the Explainability of Supervised Machine Learning (arXiv 2011.07876)
16 November 2020
Nadia Burkart
Marco F. Huber
FaML
XAI
Papers citing "A Survey on the Explainability of Supervised Machine Learning"
Showing 50 of 66 papers
Threat Modeling for AI: The Case for an Asset-Centric Approach
Jose Sanchez Vicarte
Marcin Spoczynski
Mostafa Elsaid
29
0
0
08 May 2025
Framework GNN-AID: Graph Neural Network Analysis Interpretation and Defense
Kirill Lukyanov
Mikhail Drobyshevskiy
Georgii Sazonov
Mikhail Soloviov
Ilya Makarov
GNN
46
0
0
06 May 2025
CoCoAFusE: Beyond Mixtures of Experts via Model Fusion
Aurelio Raffa Ugolini
M. Tanelli
Valentina Breschi
MoE
24
0
0
02 May 2025
Promoting Security and Trust on Social Networks: Explainable Cyberbullying Detection Using Large Language Models in a Stream-Based Machine Learning Framework
Silvia García-Méndez
Francisco de Arriba-Pérez
17
0
0
07 Apr 2025
Surrogate Modeling for Explainable Predictive Time Series Corrections
Alfredo Lopez
Florian Sobieczky
AI4TS
43
0
0
17 Jan 2025
Attention Mechanisms Don't Learn Additive Models: Rethinking Feature Importance for Transformers
Tobias Leemann
Alina Fastowski
Felix Pfeiffer
Gjergji Kasneci
59
4
0
10 Jan 2025
On the influence of dependent features in classification problems: a game-theoretic perspective
Laura Davila-Pena
Alejandro Saavedra-Nieves
Balbina Casas-Méndez
TDI
FAtt
15
0
0
05 Aug 2024
Explaining Graph Neural Networks for Node Similarity on Graphs
Daniel Daza
C. Chu
T. Tran
Daria Stepanova
Michael Cochez
Paul T. Groth
36
1
0
10 Jul 2024
Introducing 'Inside' Out of Distribution
Teddy Lazebnik
31
1
0
05 Jul 2024
Towards Robust Training Datasets for Machine Learning with Ontologies: A Case Study for Emergency Road Vehicle Detection
Lynn Vonderhaar
Timothy Elvira
T. Procko
Omar Ochoa
26
0
0
21 Jun 2024
Automatic generation of insights from workers' actions in industrial workflows with explainable Machine Learning
Francisco de Arriba-Pérez
Silvia García-Méndez
Javier Otero-Mosquera
Francisco J. González Castaño
F. Gil-Castiñeira
14
0
0
18 Jun 2024
On GNN explainability with activation rules
Luca Veyrin-Forrer
Ataollah Kamal
Stefan Duffner
Marc Plantevit
C. Robardet
AI4CE
21
2
0
17 Jun 2024
Efficient Exploration of the Rashomon Set of Rule Set Models
Martino Ciaperoni
Han Xiao
A. Gionis
25
3
0
05 Jun 2024
Explainable automatic industrial carbon footprint estimation from bank transaction classification using natural language processing
Jaime González-González
Silvia García-Méndez
Francisco de Arriba-Pérez
Francisco J. González Castaño
Oscar Barba-Seara
28
8
0
23 May 2024
Flow AM: Generating Point Cloud Global Explanations by Latent Alignment
Hanxiao Tan
37
1
0
29 Apr 2024
Toward a Quantum Information System Cybersecurity Taxonomy and Testbed: Exploiting a Unique Opportunity for Early Impact
Benjamin Blakely
Joaquin Chung
Alec Poczatek
Ryan Syed
Raj Kettimuthu
16
1
0
18 Apr 2024
Accurate estimation of feature importance faithfulness for tree models
Mateusz Gajewski
Adam Karczmarz
Mateusz Rapicki
Piotr Sankowski
37
0
0
04 Apr 2024
What is the focus of XAI in UI design? Prioritizing UI design principles for enhancing XAI user experience
Dian Lei
Yao He
Jianyou Zeng
28
1
0
21 Feb 2024
Explainable AI for Safe and Trustworthy Autonomous Driving: A Systematic Review
Anton Kuznietsov
Balint Gyevnar
Cheng Wang
Steven Peters
Stefano V. Albrecht
XAI
28
26
0
08 Feb 2024
A Systematic Literature Review on Explainability for Machine/Deep Learning-based Software Engineering Research
Sicong Cao
Xiaobing Sun
Ratnadira Widyasari
David Lo
Xiaoxue Wu
...
Jiale Zhang
Bin Li
Wei Liu
Di Wu
Yixin Chen
28
6
0
26 Jan 2024
Generating Likely Counterfactuals Using Sum-Product Networks
Jiri Nemecek
Tomás Pevný
Jakub Marecek
TPM
76
0
0
25 Jan 2024
A novel post-hoc explanation comparison metric and applications
Shreyan Mitra
Leilani H. Gilpin
FAtt
31
0
0
17 Nov 2023
Scene Text Recognition Models Explainability Using Local Features
M. Ty
Rowel Atienza
28
1
0
14 Oct 2023
Interpretability is not Explainability: New Quantitative XAI Approach with a focus on Recommender Systems in Education
Riccardo Porcedda
XAI
28
0
0
18 Sep 2023
SurvBeX: An explanation method of the machine learning survival models based on the Beran estimator
Lev V. Utkin
Danila Eremenko
A. Konstantinov
30
4
0
07 Aug 2023
Beyond Single-Feature Importance with ICECREAM
M.-J. Oesterle
Patrick Blobaum
Atalanti A. Mastakouri
Elke Kirschbaum
CML
32
1
0
19 Jul 2023
A Vulnerability of Attribution Methods Using Pre-Softmax Scores
Miguel A. Lerma
Mirtha Lucas
FAtt
19
0
0
06 Jul 2023
BELLA: Black box model Explanations by Local Linear Approximations
N. Radulovic
Albert Bifet
Fabian M. Suchanek
FAtt
34
1
0
18 May 2023
Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK
L. Nannini
Agathe Balayn
A. Smith
16
37
0
20 Apr 2023
Learning the Finer Things: Bayesian Structure Learning at the Instantiation Level
Chase Yakaboski
E. Santos
19
2
0
08 Mar 2023
Less is More: The Influence of Pruning on the Explainability of CNNs
David Weber
F. Merkle
Pascal Schöttle
Stephan Schlögl
Martin Nocker
FAtt
29
1
0
17 Feb 2023
A Survey on Event Prediction Methods from a Systems Perspective: Bringing Together Disparate Research Areas
Janik-Vasily Benzin
S. Rinderle-Ma
AI4TS
38
2
0
08 Feb 2023
Weakly Supervised Learning Significantly Reduces the Number of Labels Required for Intracranial Hemorrhage Detection on Head CT
Jacopo Teneggi
P. Yi
Jeremias Sulam
25
3
0
29 Nov 2022
Deep Fake Detection, Deterrence and Response: Challenges and Opportunities
Amin Azmoodeh
Ali Dehghantanha
29
2
0
26 Nov 2022
Mixture of Decision Trees for Interpretable Machine Learning
Simeon Brüggenjürgen
Nina Schaaf
P. Kerschke
Marco F. Huber
MoE
9
0
0
26 Nov 2022
Beyond Mahalanobis-Based Scores for Textual OOD Detection
Pierre Colombo
Eduardo Dadalto Camara Gomes
Guillaume Staerman
Nathan Noiry
Pablo Piantanida
OODD
41
5
0
24 Nov 2022
On the Robustness of Explanations of Deep Neural Network Models: A Survey
Amlan Jyoti
Karthik Balaji Ganesh
Manoj Gayala
Nandita Lakshmi Tunuguntla
Sandesh Kamath
V. Balasubramanian
XAI
FAtt
AAML
32
4
0
09 Nov 2022
Redefining Counterfactual Explanations for Reinforcement Learning: Overview, Challenges and Opportunities
Jasmina Gajcin
Ivana Dusparic
CML
OffRL
35
8
0
21 Oct 2022
FEAMOE: Fair, Explainable and Adaptive Mixture of Experts
Shubham Sharma
Jette Henderson
Joydeep Ghosh
FedML
MoE
28
5
0
10 Oct 2022
Interpreting the Mechanism of Synergism for Drug Combinations Using Attention-Based Hierarchical Graph Pooling
Zehao Dong
Heming Zhang
Yixin Chen
Philip R. O. Payne
Fuhai Li
GNN
40
16
0
19 Sep 2022
Slimmable Quantum Federated Learning
Won Joon Yun
Jae Pyoung Kim
Soyi Jung
Jihong Park
M. Bennis
Joongheon Kim
15
27
0
20 Jul 2022
Implementing Reinforcement Learning Datacenter Congestion Control in NVIDIA NICs
Benjamin Fuhrer
Yuval Shpigelman
Chen Tessler
Shie Mannor
Gal Chechik
E. Zahavi
Gal Dalal
25
4
0
05 Jul 2022
Attention Flows for General Transformers
Niklas Metzger
Christopher Hahn
Julian Siber
Frederik Schmitt
Bernd Finkbeiner
34
0
0
30 May 2022
The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations
Aparna Balagopalan
Haoran Zhang
Kimia Hamidieh
Thomas Hartvigsen
Frank Rudzicz
Marzyeh Ghassemi
38
77
0
06 May 2022
Mapping the landscape of histomorphological cancer phenotypes using self-supervised learning on unlabeled, unannotated pathology slides
A. Quiros
N. Coudray
A. Yeaton
Xinyu Yang
Bojing Liu
...
H. Pass
A. Moreira
J. L. Quesne
A. Tsirigos
Ke-Fei Yuan
SSL
13
5
0
04 May 2022
Explainability in reinforcement learning: perspective and position
Agneza Krajna
Mario Brčič
T. Lipić
Juraj Dončević
28
27
0
22 Mar 2022
ReCCoVER: Detecting Causal Confusion for Explainable Reinforcement Learning
Jasmina Gajcin
Ivana Dusparic
CML
43
6
0
21 Mar 2022
How to Learn from Risk: Explicit Risk-Utility Reinforcement Learning for Efficient and Safe Driving Strategies
Lukas M. Schmidt
Sebastian Rietsch
Axel Plinge
Bjoern M. Eskofier
Christopher Mutschler
OffRL
22
5
0
16 Mar 2022
Explainability for identification of vulnerable groups in machine learning models
Inga Strümke
Marija Slavkovik
FaML
25
3
0
01 Mar 2022
Diagnosing AI Explanation Methods with Folk Concepts of Behavior
Alon Jacovi
Jasmijn Bastings
Sebastian Gehrmann
Yoav Goldberg
Katja Filippova
36
15
0
27 Jan 2022