Explainable AI for clinical risk prediction: a survey of concepts, methods, and modalities
arXiv:2308.08407 · 16 August 2023
Munib Mesinovic, Peter Watkinson, Ting Zhu
Topics: FaML
Papers citing "Explainable AI for clinical risk prediction: a survey of concepts, methods, and modalities" (7 papers)
Can Current Explainability Help Provide References in Clinical Notes to Support Humans Annotate Medical Codes?
  Byung-Hak Kim, Zhongfen Deng, Philip S. Yu, Varun Ganapathi · ELM · 6 citations · 28 Oct 2022

Improving ECG Classification Interpretability using Saliency Maps
  Yola Jones, F. Deligianni, Jeffrey Stephen Dalton · FAtt · 19 citations · 10 Jan 2022

Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations
  N. Jethani, Mukund Sudarshan, Yindalon Aphinyanagphongs, Rajesh Ranganath · FAtt · 70 citations · 02 Mar 2021

On Completeness-aware Concept-Based Explanations in Deep Neural Networks
  Chih-Kuan Yeh, Been Kim, Sercan Ö. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar · FAtt · 293 citations · 17 Oct 2019

A Survey on Bias and Fairness in Machine Learning
  Ninareh Mehrabi, Fred Morstatter, N. Saxena, Kristina Lerman, Aram Galstyan · SyDa, FaML · 4,143 citations · 23 Aug 2019

A causal framework for explaining the predictions of black-box sequence-to-sequence models
  David Alvarez-Melis, Tommi Jaakkola · CML · 201 citations · 06 Jul 2017

Methods for Interpreting and Understanding Deep Neural Networks
  G. Montavon, Wojciech Samek, K. Müller · FaML · 2,231 citations · 24 Jun 2017