The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations
22 July 2019
Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, X. Renard, Marcin Detyniecki
arXiv: 1907.09294

Papers citing "The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations"

50 of 102 citing papers shown

When Counterfactual Reasoning Fails: Chaos and Real-World Complexity
Yahya Aalaila, Gerrit Großmann, Sumantrak Mukherjee, Jonas Wahl, Sebastian Vollmer
31 Mar 2025

Re-Imagining Multimodal Instruction Tuning: A Representation View
Yiyang Liu, James Liang, Ruixiang Tang, Yugyung Lee, Majid Rabbani, ..., Raghuveer M. Rao, Lifu Huang, Dongfang Liu, Qifan Wang, Cheng Han
02 Mar 2025

Controlled Model Debiasing through Minimal and Interpretable Updates
Federico Di Gennaro, Thibault Laugel, Vincent Grari, Marcin Detyniecki
28 Feb 2025

Models That Are Interpretable But Not Transparent
Chudi Zhong, Panyu Chen, Cynthia Rudin
26 Feb 2025

Concept Layers: Enhancing Interpretability and Intervenability via LLM Conceptualization
Or Raphael Bidusa, Shaul Markovitch
20 Feb 2025

Interpretable Image Classification with Adaptive Prototype-based Vision Transformers
Chiyu Ma, J. Donnelly, Wenjun Liu, Soroush Vosoughi, Cynthia Rudin, Chaofan Chen
28 Oct 2024

Recent advances in interpretable machine learning using structure-based protein representations
L. Vecchietti, Minji Lee, Begench Hangeldiyev, Hyunkyu Jung, Hahnbeom Park, Tae-Kyun Kim, Meeyoung Cha, Ho Min Kim
26 Sep 2024

M$^2$PT: Multimodal Prompt Tuning for Zero-shot Instruction Learning
Taowen Wang, Yiyang Liu, James Liang, Junhan Zhao, Yiming Cui, ..., Zenglin Xu, Cheng Han, Lifu Huang, Qifan Wang, Dongfang Liu
24 Sep 2024

CHILLI: A data context-aware perturbation method for XAI
Saif Anwar, Nathan Griffiths, A. Bhalerao, T. Popham
10 Jul 2024

Towards Understanding Sensitive and Decisive Patterns in Explainable AI: A Case Study of Model Interpretation in Geometric Deep Learning
Jiajun Zhu, Siqi Miao, Rex Ying, Pan Li
30 Jun 2024

Neural Concept Binder
Wolfgang Stammer, Antonia Wüst, David Steinmann, Kristian Kersting
14 Jun 2024

Mitigating Text Toxicity with Counterfactual Generation
Milan Bhan, Jean-Noel Vittaut, Nina Achache, Victor Legrand, Nicolas Chesneau, A. Blangero, Juliette Murris, Marie-Jeanne Lesot
16 May 2024

Interpretability in Symbolic Regression: a benchmark of Explanatory Methods using the Feynman data set
Guilherme Seidyo Imai Aldeia, Fabrício Olivetti de França
08 Apr 2024

Neural Clustering based Visual Representation Learning
Guikun Chen, Xia Li, Yi Yang, Wenguan Wang
26 Mar 2024

Towards Non-Adversarial Algorithmic Recourse
Tobias Leemann, Martin Pawelczyk, Bardh Prenkaj, Gjergji Kasneci
15 Mar 2024

Unmasking Dementia Detection by Masking Input Gradients: A JSM Approach to Model Interpretability and Precision
Yasmine Mustafa, Tie-Mei Luo
25 Feb 2024

Understanding Disparities in Post Hoc Machine Learning Explanation
Vishwali Mhasawade, Salman Rahman, Zoe Haskell-Craig, R. Chunara
25 Jan 2024

Facing the Elephant in the Room: Visual Prompt Tuning or Full Finetuning?
Cheng Han, Qifan Wang, Yiming Cui, Wenguan Wang, Lifu Huang, Siyuan Qi, Dongfang Liu
23 Jan 2024

MICA: Towards Explainable Skin Lesion Diagnosis via Multi-Level Image-Concept Alignment
Yequan Bie, Luyang Luo, Hao Chen
16 Jan 2024

On the Relationship Between Interpretability and Explainability in Machine Learning
Benjamin Leblanc, Pascal Germain
20 Nov 2023

The Utility of "Even if..." Semifactual Explanation to Optimise Positive Outcomes
Eoin M. Kenny, Weipeng Huang
29 Oct 2023

This Looks Like Those: Illuminating Prototypical Concepts Using Multiple Visualizations
Chiyu Ma, Brandon Zhao, Chaofan Chen, Cynthia Rudin
28 Oct 2023

Towards Faithful Neural Network Intrinsic Interpretation with Shapley Additive Self-Attribution
Ying Sun, Hengshu Zhu, Huixia Xiong
27 Sep 2023

On the Connection between Game-Theoretic Feature Attributions and Counterfactual Explanations
Emanuele Albini, Shubham Sharma, Saumitra Mishra, Danial Dervovic, Daniele Magazzeni
13 Jul 2023

Topological Interpretability for Deep-Learning
Adam Spannaus, Heidi A. Hanson, Lynne Penberthy, Georgia D. Tourassi
15 May 2023

Achieving Diversity in Counterfactual Explanations: a Review and Discussion
Thibault Laugel, Adulam Jeyasothy, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki
10 May 2023

TIGTEC : Token Importance Guided TExt Counterfactuals
Milan Bhan, Jean-Noel Vittaut, Nicolas Chesneau, Marie-Jeanne Lesot
24 Apr 2023

Learning Bottleneck Concepts in Image Classification
Bowen Wang, Liangzhi Li, Yuta Nakashima, Hajime Nagahara
20 Apr 2023

Interpretable (not just posthoc-explainable) heterogeneous survivor bias-corrected treatment effects for assignment of postdischarge interventions to prevent readmissions
Hongjing Xia, Joshua C. Chang, S. Nowak, Sonya Mahajan, R. Mahajan, Ted L. Chang, Carson C. Chow
19 Apr 2023

An Interpretable Loan Credit Evaluation Method Based on Rule Representation Learner
Zi-yu Chen, Xiaomeng Wang, Yuanjiang Huang, Tao Jia
03 Apr 2023

The Contextual Lasso: Sparse Linear Models via Deep Neural Networks
Ryan Thompson, Amir Dezfouli, Robert Kohn
02 Feb 2023

Even if Explanations: Prior Work, Desiderata & Benchmarks for Semi-Factual XAI
Saugat Aryal, Mark T. Keane
27 Jan 2023

VCNet: A self-explaining model for realistic counterfactual generation
Victor Guyomard, Françoise Fessant, Thomas Guyet, Tassadit Bouadi, Alexandre Termier
21 Dec 2022

Counterfactual Explanations for Misclassified Images: How Human and Machine Explanations Differ
Eoin Delaney, A. Pakrashi, Derek Greene, Mark T. Keane
16 Dec 2022

Causality-Aware Local Interpretable Model-Agnostic Explanations
Martina Cinquini, Riccardo Guidotti
10 Dec 2022

Learning to Select Prototypical Parts for Interpretable Sequential Data Modeling
Yifei Zhang, Nengneng Gao, Cunqing Ma
07 Dec 2022

Mixture of Decision Trees for Interpretable Machine Learning
Simeon Brüggenjürgen, Nina Schaaf, P. Kerschke, Marco F. Huber
26 Nov 2022

Decomposing Counterfactual Explanations for Consequential Decision Making
Martin Pawelczyk, Lea Tiyavorabun, Gjergji Kasneci
03 Nov 2022

Interpretable Geometric Deep Learning via Learnable Randomness Injection
Siqi Miao, Yunan Luo, Miaoyuan Liu, Pan Li
30 Oct 2022

Improvement-Focused Causal Recourse (ICR)
Gunnar König, Timo Freiesleben, Moritz Grosse-Wentrup
27 Oct 2022

The privacy issue of counterfactual explanations: explanation linkage attacks
S. Goethals, Kenneth Sörensen, David Martens
21 Oct 2022

Interpretable Deep Tracking
Benjamin Thérien, Krzysztof Czarnecki
03 Oct 2022

Visual Recognition with Deep Nearest Centroids
Wenguan Wang, Cheng Han, Tianfei Zhou, Dongfang Liu
15 Sep 2022

Interpretable (not just posthoc-explainable) medical claims modeling for discharge placement to prevent avoidable all-cause readmissions or death
Joshua C. Chang, Ted L. Chang, Carson C. Chow, R. Mahajan, Sonya Mahajan, Joe Maisog, Shashaank Vattikuti, Hongjing Xia
28 Aug 2022

Equivariant and Invariant Grounding for Video Question Answering
Yicong Li, Xiang Wang, Junbin Xiao, Tat-Seng Chua
26 Jul 2022

Attribution-based Explanations that Provide Recourse Cannot be Robust
H. Fokkema, R. D. Heide, T. Erven
31 May 2022

Don't Explain Noise: Robust Counterfactuals for Randomized Ensembles
Alexandre Forel, Axel Parmentier, Thibaut Vidal
27 May 2022

Scalable Interpretability via Polynomials
Abhimanyu Dubey, Filip Radenovic, D. Mahajan
27 May 2022

Constructive Interpretability with CoLabel: Corroborative Integration, Complementary Features, and Collaborative Learning
Abhijit Suprem, Sanjyot Vaidya, Suma Cherkadi, Purva Singh, J. E. Ferreira, C. Pu
20 May 2022

Concept Evolution in Deep Learning Training: A Unified Interpretation Framework and Discoveries
Haekyu Park, Seongmin Lee, Benjamin Hoover, Austin P. Wright, Omar Shaikh, Rahul Duggal, Nilaksh Das, Kevin Wenliang Li, Judy Hoffman, Duen Horng Chau
30 Mar 2022