Reliable Post hoc Explanations: Modeling Uncertainty in Explainability
Dylan Slack, Sophie Hilgard, Sameer Singh, Himabindu Lakkaraju
arXiv: 2008.05030. 11 August 2020. [FAtt]
Papers citing "Reliable Post hoc Explanations: Modeling Uncertainty in Explainability" (25 papers):
Display Content, Display Methods and Evaluation Methods of the HCI in Explainable Recommender Systems: A Survey
Weiqing Li, Yue Xu, Yuefeng Li, Yinghui Huang. 14 May 2025.

DiCE-Extended: A Robust Approach to Counterfactual Explanations in Machine Learning
Volkan Bakir, Polat Goktas, Sureyya Akyuz. 26 Apr 2025.

Feature Importance Depends on Properties of the Data: Towards Choosing the Correct Explanations for Your Data and Decision Trees based Models
Célia Wafa Ayad, Thomas Bonnier, Benjamin Bosch, Sonali Parbhoo, Jesse Read. 11 Feb 2025. [FAtt, XAI]

Counterfactuals As a Means for Evaluating Faithfulness of Attribution Methods in Autoregressive Language Models
Sepehr Kamahi, Yadollah Yaghoobzadeh. 21 Aug 2024.

Efficient and Accurate Explanation Estimation with Distribution Compression
Hubert Baniecki, Giuseppe Casalicchio, Bernd Bischl, Przemyslaw Biecek. 26 Jun 2024. [FAtt]

Evaluating Saliency Explanations in NLP by Crowdsourcing
Xiaotian Lu, Jiyi Li, Zhen Wan, Xiaofeng Lin, Koh Takeuchi, Hisashi Kashima. 17 May 2024. [XAI, FAtt, LRM]

Post-hoc and manifold explanations analysis of facial expression data based on deep learning
Yang Xiao. 29 Apr 2024.

Segmentation, Classification and Interpretation of Breast Cancer Medical Images using Human-in-the-Loop Machine Learning
David Vázquez-Lema, E. Mosqueira-Rey, Elena Hernández-Pereira, Carlos Fernández-Lozano, Fernando Seara-Romera, Jorge Pombo-Otero. 29 Mar 2024. [LM&MA]

Uncertainty Quantification for Gradient-based Explanations in Neural Networks
Mihir Mulye, Matias Valdenegro-Toro. 25 Mar 2024. [UQCV, FAtt]

QUCE: The Minimisation and Quantification of Path-Based Uncertainty for Generative Counterfactual Explanations
J. Duell, M. Seisenberger, Hsuan-Wei Fu, Xiuyi Fan. 27 Feb 2024. [UQCV, BDL]

Explaining Probabilistic Models with Distributional Values
Luca Franceschi, Michele Donini, Cédric Archambeau, Matthias Seeger. 15 Feb 2024. [FAtt]

The Duet of Representations and How Explanations Exacerbate It
Charles Wan, Rodrigo Belo, Leid Zejnilovic, Susana Lavado. 13 Feb 2024. [CML, FAtt]

Generating Explanations to Understand and Repair Embedding-based Entity Alignment
Xiaobin Tian, Zequn Sun, Wei Hu. 08 Dec 2023.

Generative Perturbation Analysis for Probabilistic Black-Box Anomaly Attribution
T. Idé, Naoki Abe. 09 Aug 2023.

Confident Feature Ranking
Bitya Neuhof, Y. Benjamini. 28 Jul 2023. [FAtt]

Communicating Uncertainty in Machine Learning Explanations: A Visualization Analytics Approach for Predictive Process Monitoring
Nijat Mehdiyev, Maxim Majlatow, Peter Fettke. 12 Apr 2023.

Ignore, Trust, or Negotiate: Understanding Clinician Acceptance of AI-Based Treatment Recommendations in Health Care
Venkatesh Sivaraman, L. Bukowski, J. Levin, J. Kahn, Adam Perer. 31 Jan 2023.

REVEL Framework to measure Local Linear Explanations for black-box models: Deep Learning Image Classification case of study
Iván Sevillano-García, Julián Luengo-Martín, Francisco Herrera. 11 Nov 2022. [XAI, FAtt]

Neural Basis Models for Interpretability
Filip Radenovic, Abhimanyu Dubey, D. Mahajan. 27 May 2022. [FAtt]

Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis
Thomas Fel, Mélanie Ducoffe, David Vigouroux, Rémi Cadène, Mikael Capelle, C. Nicodeme, Thomas Serre. 15 Feb 2022. [AAML]

The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective
Satyapriya Krishna, Tessa Han, Alex Gu, Steven Wu, S. Jabbari, Himabindu Lakkaraju. 03 Feb 2022.

Metrics for saliency map evaluation of deep learning explanation methods
T. Gomez, Thomas Fréour, Harold Mouchère. 31 Jan 2022. [XAI, FAtt]

Post-Hoc Explanations Fail to Achieve their Purpose in Adversarial Contexts
Sebastian Bordt, Michèle Finck, Eric Raidl, U. V. Luxburg. 25 Jan 2022. [AILaw]

What will it take to generate fairness-preserving explanations?
Jessica Dai, Sohini Upadhyay, Stephen H. Bach, Himabindu Lakkaraju. 24 Jun 2021. [FAtt, FaML]

Rational Shapley Values
David S. Watson. 18 Jun 2021.