The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations

22 July 2019
Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, X. Renard, Marcin Detyniecki
arXiv: 1907.09294 (abs · PDF · HTML)

Papers citing "The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations"

50 / 102 papers shown
Benchmarking Instance-Centric Counterfactual Algorithms for XAI: From White Box to Black Box
Catarina Moreira, Yu-Liang Chou, Chih-Jou Hsieh, Chun Ouyang, Joaquim A. Jorge, João Pereira
CML · 04 Mar 2022
Sensing accident-prone features in urban scenes for proactive driving and accident prevention
Sumit Mishra, Praveenbalaji Rajendran, L. Vecchietti, Dongsoo Har
25 Feb 2022
On the Robustness of Sparse Counterfactual Explanations to Adverse Perturbations
M. Virgolin, Saverio Fracaros
CML · 22 Jan 2022
From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI
Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jorg Schlotterer, M. V. Keulen, C. Seifert
ELM, XAI · 20 Jan 2022
Verifying Controllers with Convolutional Neural Network-based Perception: A Case for Intelligible, Safe, and Precise Abstractions
Chiao Hsieh, Keyur Joshi, Sasa Misailovic, Sayan Mitra
10 Nov 2021
Solving the Class Imbalance Problem Using a Counterfactual Method for Data Augmentation
M. Temraz, Mark T. Keane
05 Nov 2021
Counterfactual Shapley Additive Explanations
Emanuele Albini, Jason Long, Danial Dervovic, Daniele Magazzeni
27 Oct 2021
CARE: Coherent Actionable Recourse based on Sound Counterfactual Explanations
P. Rasouli, Ingrid Chieh Yu
18 Aug 2021
On the Importance of Domain-specific Explanations in AI-based Cybersecurity Systems (Technical Report)
José Paredes, J. C. Teze, Gerardo Simari, Maria Vanina Martinez
02 Aug 2021
CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
Martin Pawelczyk, Sascha Bielawski, J. V. D. Heuvel, Tobias Richter, Gjergji Kasneci
CML · 02 Aug 2021
Deep learning for temporal data representation in electronic health records: A systematic review of challenges and methodologies
F. Xie, Han Yuan, Yilin Ning, M. Ong, Mengling Feng, Wynne Hsu, B. Chakraborty, Nan Liu
21 Jul 2021
Uncertainty Estimation and Out-of-Distribution Detection for Counterfactual Explanations: Pitfalls and Solutions
Eoin Delaney, Derek Greene, Mark T. Keane
20 Jul 2021
Personalized and Reliable Decision Sets: Enhancing Interpretability in Clinical Decision Support Systems
Francisco Valente, Simão Paredes, J. Henriques
15 Jul 2021
Knowledge-Grounded Self-Rationalization via Extractive and Natural Language Explanations
Bodhisattwa Prasad Majumder, Oana-Maria Camburu, Thomas Lukasiewicz, Julian McAuley
25 Jun 2021
Interpretable Machine Learning Classifiers for Brain Tumour Survival Prediction
C. Charlton, M. Poon, P. Brennan, Jacques D. Fleuriot
17 Jun 2021
Characterizing the risk of fairwashing
Ulrich Aïvodji, Hiromi Arai, Sébastien Gambs, Satoshi Hara
14 Jun 2021
Partially Interpretable Estimators (PIE): Black-Box-Refined Interpretable Machine Learning
Tong Wang, Jingyi Yang, Yunyi Li, Boxiang Wang
FAtt · 06 May 2021
Twin Systems for DeepCBR: A Menagerie of Deep Learning and Case-Based Reasoning Pairings for Explanation and Data Augmentation
Mark T. Keane, Eoin M. Kenny, M. Temraz, Derek Greene, Barry Smyth
29 Apr 2021
Individual Explanations in Machine Learning Models: A Case Study on Poverty Estimation
Alfredo Carrillo, Luis F. Cantú, Luis Tejerina, Alejandro Noriega
09 Apr 2021
Individual Explanations in Machine Learning Models: A Survey for Practitioners
Alfredo Carrillo, Luis F. Cantú, Alejandro Noriega
FaML · 09 Apr 2021
Modeling Users and Online Communities for Abuse Detection: A Position on Ethics and Explainability
Pushkar Mishra, H. Yannakoudakis, Ekaterina Shutova
31 Mar 2021
Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges
Cynthia Rudin, Chaofan Chen, Zhi Chen, Haiyang Huang, Lesia Semenova, Chudi Zhong
FaML, AI4CE, LRM · 20 Mar 2021
Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties
Lisa Schut, Oscar Key, R. McGrath, Luca Costabello, Bogdan Sacaleanu, Medb Corcoran, Y. Gal
CML · 16 Mar 2021
Counterfactuals and Causability in Explainable Artificial Intelligence: Theory, Algorithms, and Applications
Yu-Liang Chou, Catarina Moreira, P. Bruza, Chun Ouyang, Joaquim A. Jorge
CML · 07 Mar 2021
Human-Understandable Decision Making for Visual Recognition
Xiaowei Zhou, Jie Yin, Ivor Tsang, Chen Wang
FAtt, HAI · 05 Mar 2021
If Only We Had Better Counterfactual Explanations: Five Key Deficits to Rectify in the Evaluation of Counterfactual XAI Techniques
Mark T. Keane, Eoin M. Kenny, Eoin Delaney, Barry Smyth
CML · 26 Feb 2021
A Few Good Counterfactuals: Generating Interpretable, Plausible and Diverse Counterfactual Explanations
Barry Smyth, Mark T. Keane
CML · 22 Jan 2021
Unbox the Blackbox: Predict and Interpret YouTube Viewership Using Deep Learning
Jiaheng Xie, Xinyu Liu
HAI · 21 Dec 2020
Sparse encoding for more-interpretable feature-selecting representations in probabilistic matrix factorization
Joshua C. Chang, P. Fletcher, Ju Han, Ted L. Chang, Shashaank Vattikuti, Bart Desmet, Ayah Zirikly, Carson C. Chow
08 Dec 2020
Shapley values for cluster importance: How clusters of the training data affect a prediction
A. Brandsæter, I. Glad
TDI, FAtt · 07 Dec 2020
Challenging common interpretability assumptions in feature attribution explanations
Jonathan Dinu, Jeffrey P. Bigham, J. Z. Kolter
04 Dec 2020
Neural Prototype Trees for Interpretable Fine-grained Image Recognition
Meike Nauta, Ron van Bree, C. Seifert
03 Dec 2020
Counterfactual Explanations and Algorithmic Recourses for Machine Learning: A Review
Sahil Verma, Varich Boonsanong, Minh Hoang, Keegan E. Hines, John P. Dickerson, Chirag Shah
CML · 20 Oct 2020
Interpretable Machine Learning -- A Brief History, State-of-the-Art and Challenges
Christoph Molnar, Giuseppe Casalicchio, B. Bischl
AI4TS, AI4CE · 19 Oct 2020
A survey of algorithmic recourse: definitions, formulations, solutions, and prospects
Amir-Hossein Karimi, Gilles Barthe, Bernhard Schölkopf, Isabel Valera
FaML · 08 Oct 2020
Instance-based Counterfactual Explanations for Time Series Classification
Eoin Delaney, Derek Greene, Mark T. Keane
CML, AI4TS · 28 Sep 2020
The Intriguing Relation Between Counterfactual Explanations and Adversarial Examples
Timo Freiesleben
GAN · 11 Sep 2020
On Generating Plausible Counterfactual and Semi-Factual Explanations for Deep Learning
Eoin M. Kenny, Mark T. Keane
10 Sep 2020
Model extraction from counterfactual explanations
Ulrich Aïvodji, Alexandre Bolot, Sébastien Gambs
MIACV, MLAU · 03 Sep 2020
Drug discovery with explainable artificial intelligence
José Jiménez-Luna, F. Grisoni, G. Schneider
01 Jul 2020
On Counterfactual Explanations under Predictive Multiplicity
Martin Pawelczyk, Klaus Broelemann, Gjergji Kasneci
23 Jun 2020
Explainable Artificial Intelligence: a Systematic Review
Giulia Vilone, Luca Longo
XAI · 29 May 2020
Good Counterfactuals and Where to Find Them: A Case-Based Technique for Generating Counterfactuals for Explainable AI (XAI)
Mark T. Keane, Barry Smyth
CML · 26 May 2020
Don't Explain without Verifying Veracity: An Evaluation of Explainable AI with Video Activity Recognition
Mahsan Nourani, Chiradeep Roy, Tahrima Rahman, Eric D. Ragan, Nicholas Ruozzi, Vibhav Gogate
AAML · 05 May 2020
An Efficient Explorative Sampling Considering the Generative Boundaries of Deep Generative Neural Networks
Giyoung Jeon, Haedong Jeong, Jaesik Choi
12 Dec 2019
Explainability Fact Sheets: A Framework for Systematic Assessment of Explainable Approaches
Kacper Sokol, Peter A. Flach
XAI · 11 Dec 2019
Identifying the Most Explainable Classifier
Brett Mullins
FAtt · 18 Oct 2019
Model-Agnostic Linear Competitors -- When Interpretable Models Compete and Collaborate with Black-Box Models
Hassan Rafique, Tong Wang, Qihang Lin
23 Sep 2019
Predictive Multiplicity in Classification
Charles Marx, Flavio du Pin Calmon, Berk Ustun
14 Sep 2019
Learning Fair Rule Lists
Ulrich Aïvodji, Julien Ferry, Sébastien Gambs, Marie-José Huguet, Mohamed Siala
FaML · 09 Sep 2019