ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

The future of human-centric eXplainable Artificial Intelligence (XAI) is not post-hoc explanations

1 July 2023
Vinitra Swamy
Jibril Frej
Tanja Kaser

Papers citing "The future of human-centric eXplainable Artificial Intelligence (XAI) is not post-hoc explanations"

12 papers shown
In defence of post-hoc explanations in medical AI
Joshua Hatherley, Lauritz Munch, Jens Christian Bjerring
29 Apr 2025

A constraints-based approach to fully interpretable neural networks for detecting learner behaviors
Juan D. Pinto, Luc Paquette
10 Apr 2025

PRECISe: Prototype-Reservation for Explainable Classification under Imbalanced and Scarce-Data Settings
Vaibhav Ganatra, Drishti Goel
11 Aug 2024

Interpret3C: Interpretable Student Clustering Through Individualized Feature Selection
Isadora Salles, Paola Mejia-Domenzain, Vinitra Swamy, Julian Blackwell, Tanja Kaser
28 May 2024

Towards a Unified Framework for Evaluating Explanations
Juan D. Pinto, Luc Paquette
22 May 2024

Deep Learning for Educational Data Science
Juan D. Pinto, Luc Paquette
12 Apr 2024

InterpretCC: Intrinsic User-Centric Interpretability through Global Mixture of Experts
Vinitra Swamy, Syrielle Montariol, Julian Blackwell, Jibril Frej, Martin Jaggi, Tanja Kaser
05 Feb 2024

Justifiable Artificial Intelligence: Engineering Large Language Models for Legal Applications
Sabine Wehnert
27 Nov 2023

How Well Do Feature-Additive Explainers Explain Feature-Additive Predictors?
Zachariah Carmichael, Walter J. Scheirer
27 Oct 2023

Fairness via Explanation Quality: Evaluating Disparities in the Quality of Post hoc Explanations
Jessica Dai, Sohini Upadhyay, Ulrich Aivodji, Stephen H. Bach, Himabindu Lakkaraju
15 May 2022

The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective
Satyapriya Krishna, Tessa Han, Alex Gu, Steven Wu, S. Jabbari, Himabindu Lakkaraju
03 Feb 2022

On Interpretability of Deep Learning based Skin Lesion Classifiers using Concept Activation Vectors
Adriano Lucieri, Muhammad Naseer Bajwa, S. Braun, M. I. Malik, Andreas Dengel, Sheraz Ahmed
05 May 2020