Benchmarking and Survey of Explanation Methods for Black Box Models
arXiv:2102.13076, 25 February 2021
F. Bodria, F. Giannotti, Riccardo Guidotti, Francesca Naretto, D. Pedreschi, S. Rinzivillo
Topics: XAI

Papers citing "Benchmarking and Survey of Explanation Methods for Black Box Models" (25 papers)

Privacy Risks and Preservation Methods in Explainable Artificial Intelligence: A Scoping Review
Sonal Allana, Mohan Kankanhalli, Rozita Dara. 05 May 2025.

Exploring the Impact of Explainable AI and Cognitive Capabilities on Users' Decisions
Federico Maria Cau, Lucio Davide Spano. 02 May 2025.

Explainable AI in Time-Sensitive Scenarios: Prefetched Offline Explanation Model
Fabio Michele Russo, C. Metta, Anna Monreale, S. Rinzivillo, Fabio Pinelli. 06 Mar 2025.

Feature Importance Depends on Properties of the Data: Towards Choosing the Correct Explanations for Your Data and Decision Trees based Models
Célia Wafa Ayad, Thomas Bonnier, Benjamin Bosch, Sonali Parbhoo, Jesse Read. Topics: FAtt, XAI. 11 Feb 2025.

Coherent Local Explanations for Mathematical Optimization
Daan Otto, Jannis Kurtz, S. Ilker Birbil. 07 Feb 2025.

Explainable Emotion Decoding for Human and Computer Vision
Alessio Borriero, Martina Milazzo, M. Diano, Davide Orsenigo, Maria Chiara Villa, Chiara Di Fazio, Marco Tamietto, Alan Perotti. 01 Aug 2024.

T-Explainer: A Model-Agnostic Explainability Framework Based on Gradients
Evandro S. Ortigossa, Fábio F. Dias, Brian Barr, Claudio T. Silva, L. G. Nonato. Topics: FAtt. 25 Apr 2024.

Generating Likely Counterfactuals Using Sum-Product Networks
Jiri Nemecek, Tomás Pevný, Jakub Marecek. Topics: TPM. 25 Jan 2024.

Distributional Counterfactual Explanations With Optimal Transport
Lei You, Lele Cao, Mattias Nilsson, Bo Zhao, Lei Lei. Topics: OT, OffRL. 23 Jan 2024.

Prototypical Self-Explainable Models Without Re-training
Srishti Gautam, Ahcène Boubekki, Marina M.-C. Höhne, Michael C. Kampffmeyer. 13 Dec 2023.

Glocal Explanations of Expected Goal Models in Soccer
Mustafa Cavus, Adrian Stando, P. Biecek. 29 Aug 2023.

SurvBeX: An explanation method of the machine learning survival models based on the Beran estimator
Lev V. Utkin, Danila Eremenko, A. Konstantinov. 07 Aug 2023.

Feature construction using explanations of individual predictions
Boštjan Vouk, Matej Guid, Marko Robnik-Šikonja. Topics: FAtt. 23 Jan 2023.

Bort: Towards Explainable Neural Networks with Bounded Orthogonal Constraint
Borui Zhang, Wenzhao Zheng, Jie Zhou, Jiwen Lu. Topics: AAML. 18 Dec 2022.

A Survey on Graph Counterfactual Explanations: Definitions, Methods, Evaluation, and Research Challenges
Mario Alfonso Prado-Romero, Bardh Prenkaj, Giovanni Stilo, F. Giannotti. Topics: CML. 21 Oct 2022.

A model-agnostic approach for generating Saliency Maps to explain inferred decisions of Deep Learning Models
S. Karatsiolis, A. Kamilaris. Topics: FAtt. 19 Sep 2022.

Towards Explainable Evaluation Metrics for Natural Language Generation
Christoph Leiter, Piyawat Lertvittayakumjorn, M. Fomicheva, Wei-Ye Zhao, Yang Gao, Steffen Eger. Topics: AAML, ELM. 21 Mar 2022.

Interpreting Deep Learning Models in Natural Language Processing: A Review
Xiaofei Sun, Diyi Yang, Xiaoya Li, Tianwei Zhang, Yuxian Meng, Han Qiu, Guoyin Wang, Eduard H. Hovy, Jiwei Li. 20 Oct 2021.

Trustworthy AI: From Principles to Practices
Bo-wen Li, Peng Qi, Bo Liu, Shuai Di, Jingen Liu, Jiquan Pei, Jinfeng Yi, Bowen Zhou. 04 Oct 2021.

Conclusive Local Interpretation Rules for Random Forests
Ioannis Mollas, Nick Bassiliades, Grigorios Tsoumakas. Topics: FaML, FAtt. 13 Apr 2021.

Explainability in Graph Neural Networks: A Taxonomic Survey
Hao Yuan, Haiyang Yu, Shurui Gui, Shuiwang Ji. 31 Dec 2020.

BRPO: Batch Residual Policy Optimization
Kentaro Kanamori, Yinlam Chow, Takuya Takagi, Hiroki Arimura, Honglak Lee, Ken Kobayashi, Craig Boutilier. Topics: OffRL. 08 Feb 2020.

On Completeness-aware Concept-Based Explanations in Deep Neural Networks
Chih-Kuan Yeh, Been Kim, Sercan Ö. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar. Topics: FAtt. 17 Oct 2019.

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim. Topics: XAI, FaML. 28 Feb 2017.

Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
Alexandra Chouldechova. Topics: FaML. 24 Oct 2016.