ResearchTrend.AI

What Clinicians Want: Contextualizing Explainable Machine Learning for Clinical End Use
arXiv:1905.05134

13 May 2019
S. Tonekaboni, Shalmali Joshi, M. McCradden, Anna Goldenberg

Papers citing "What Clinicians Want: Contextualizing Explainable Machine Learning for Clinical End Use"

50 papers shown
Early Detection of Patient Deterioration from Real-Time Wearable Monitoring System
Lo Pang-Yun Ting, Hong-Pei Chen, An-Shan Liu, Chun-Yin Yeh, Po-Lin Chen, Kun-Ta Chuang
02 May 2025
ProtoECGNet: Case-Based Interpretable Deep Learning for Multi-Label ECG Classification with Contrastive Learning
Shri Kiran Srinivasan, David Chen, Thomas Statchen, Michael C. Burkhart, Nipun Bhandari, Bashar Ramadan, Brett Beaulieu-Jones
11 Apr 2025
No Black Box Anymore: Demystifying Clinical Predictive Modeling with Temporal-Feature Cross Attention Mechanism
Yubo Li, Xinyu Yao, R. Padman
Tags: FAtt, AI4TS
25 Mar 2025
Self-Explaining Hypergraph Neural Networks for Diagnosis Prediction
Leisheng Yu, Yanxiao Cai, Minxing Zhang, Xia Hu
Tags: FAtt
15 Feb 2025
Controlling for Unobserved Confounding with Large Language Model Classification of Patient Smoking Status
Samuel Lee, Zach Wood-Doughty
Tags: CML
05 Nov 2024
Rideshare Transparency: Translating Gig Worker Insights on AI Platform Design to Policy
Varun Nagaraj Rao, Samantha Dalal, Eesha Agarwal, D. Calacci, Andrés Monroy-Hernández
16 Jun 2024
Towards Optimising EEG Decoding using Post-hoc Explanations and Domain Knowledge
Param S. Rajpura, Y. Meena
02 May 2024
Designing for Complementarity: A Conceptual Framework to Go Beyond the Current Paradigm of Using XAI in Healthcare
Elisa Rubegni, Omran Ayoub, Stefania Maria Rita Rizzo, Marco Barbero, G. Bernegger, Francesca Faraci, Francesca Mangili, Emiliano Soldini, P. Trimboli, Alessandro Facchini
06 Apr 2024
Inadequacy of common stochastic neural networks for reliable clinical decision support
Adrian Lindenmeyer, Malte Blattmann, S. Franke, Thomas Neumuth, Daniel Schneider
Tags: BDL
24 Jan 2024
Elucidating Discrepancy in Explanations of Predictive Models Developed using EMR
A. Brankovic, Wenjie Huang, David Cook, Sankalp Khanna, K. Bialkowski
28 Nov 2023
Predictability and Comprehensibility in Post-Hoc XAI Methods: A User-Centered Analysis
Anahid N. Jalali, Bernhard Haslhofer, Simone Kriglstein, Andreas Rauber
Tags: FAtt
21 Sep 2023
Towards a Comprehensive Human-Centred Evaluation Framework for Explainable AI
Ivania Donoso-Guzmán, Jeroen Ooge, Denis Parra, K. Verbert
31 Jul 2023
Evaluation of Popular XAI Applied to Clinical Prediction Models: Can They be Trusted?
A. Brankovic, David Cook, Jessica Rahman, Wenjie Huang, Sankalp Khanna
21 Jun 2023
Counterfactual Explanations and Predictive Models to Enhance Clinical Decision-Making in Schizophrenia using Digital Phenotyping
Juan Sebastián Canas, Francisco Gomez, Omar Costilla-Reyes
06 Jun 2023
Appraising the Potential Uses and Harms of LLMs for Medical Systematic Reviews
Hye Sun Yun, Iain J. Marshall, T. Trikalinos, Byron C. Wallace
19 May 2023
A Review on Explainable Artificial Intelligence for Healthcare: Why, How, and When?
M. Rubaiyat Hossain Mondal, Prajoy Podder
10 Apr 2023
Dermatologist-like explainable AI enhances trust and confidence in diagnosing melanoma
T. Chanda, Katja Hauser, S. Hobelsberger, Tabea-Clara Bucher, Carina Nogueira Garcia, ..., J. Utikal, K. Ghoreschi, S. Fröhling, E. Krieghoff-Henning, T. Brinker
17 Mar 2023
Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes
Shruthi Chari, Prasanth Acharya, Daniel Gruen, Olivia R. Zhang, Elif Eyigoz, ..., Oshani Seneviratne, Fernando Jose Suarez Saiz, Pablo Meyer, Prithwish Chakraborty, D. McGuinness
11 Feb 2023
Ignore, Trust, or Negotiate: Understanding Clinician Acceptance of AI-Based Treatment Recommendations in Health Care
Venkatesh Sivaraman, L. Bukowski, J. Levin, J. Kahn, Adam Perer
31 Jan 2023
Towards Reconciling Usability and Usefulness of Explainable AI Methodologies
Pradyumna Tambwekar, Matthew C. Gombolay
13 Jan 2023
Context-dependent Explainability and Contestability for Trustworthy Medical Artificial Intelligence: Misclassification Identification of Morbidity Recognition Models in Preterm Infants
Isil Guzey, Ozlem Ucar, N. A. Çiftdemir, B. Acunaş
17 Dec 2022
Explainability of Traditional and Deep Learning Models on Longitudinal Healthcare Records
L. Cheong, Tesfagabir Meharizghi, Wynona Black, Yang Guang, Weilin Meng
Tags: FAtt, AI4TS
22 Nov 2022
Fully-attentive and interpretable: vision and video vision transformers for pain detection
Giacomo Fiorentini, Itir Onal Ertugrul, A. A. Salah
Tags: MedIm, ViT
27 Oct 2022
Trustworthy clinical AI solutions: a unified review of uncertainty quantification in deep learning models for medical image analysis
Benjamin Lambert, Florence Forbes, A. Tucholka, Senan Doyle, Harmonie Dehaene, M. Dojat
05 Oct 2022
"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction
Sunnie S. Y. Kim, E. A. Watkins, Olga Russakovsky, Ruth C. Fong, Andrés Monroy-Hernández
02 Oct 2022
Explainable Artificial Intelligence Applications in Cyber Security: State-of-the-Art in Research
Zhibo Zhang, H. A. Hamadi, Ernesto Damiani, C. Yeun, Fatma Taher
Tags: AAML
31 Aug 2022
Exploring How Anomalous Model Input and Output Alerts Affect Decision-Making in Healthcare
Marissa Radensky, Dustin Burson, Rajya Bhaiya, Daniel S. Weld
27 Apr 2022
Example-based Explanations with Adversarial Attacks for Respiratory Sound Analysis
Yi Chang, Zhao Ren, Thanh Van Nguyen, Wolfgang Nejdl, Björn W. Schuller
Tags: AAML
30 Mar 2022
Trust in AI: Interpretability is not necessary or sufficient, while black-box interaction is necessary and sufficient
Max W. Shen
10 Feb 2022
Towards a Shapley Value Graph Framework for Medical peer-influence
J. Duell, M. Seisenberger, Gert Aarts, Shang-Ming Zhou, Xiuyi Fan
29 Dec 2021
Explainable Deep Learning in Healthcare: A Methodological Survey from an Attribution View
Di Jin, Elena Sergeeva, W. Weng, Geeticka Chauhan, Peter Szolovits
Tags: OOD
05 Dec 2021
Self-explaining Neural Network with Concept-based Explanations for ICU Mortality Prediction
Sayantan Kumar, Sean C. Yu, Thomas Kannampallil, Zachary B. Abrams, Andrew Michelson, Philip R. O. Payne
Tags: FAtt
09 Oct 2021
BI-RADS-Net: An Explainable Multitask Learning Approach for Cancer Diagnosis in Breast Ultrasound Images
Boyu Zhang, Aleksandar Vakanski, Min Xian
05 Oct 2021
VBridge: Connecting the Dots Between Features and Data to Explain Healthcare Models
Furui Cheng, Dongyu Liu, F. Du, Yanna Lin, Alexandra Zytek, Haomin Li, Huamin Qu, K. Veeramachaneni
04 Aug 2021
A Survey on Graph-Based Deep Learning for Computational Histopathology
David Ahmedt-Aristizabal, M. Armin, Simon Denman, Clinton Fookes, L. Petersson
Tags: GNN, AI4CE
01 Jul 2021
A Review on Explainability in Multimodal Deep Neural Nets
Gargi Joshi, Rahee Walambe, K. Kotecha
17 May 2021
Learning to Predict with Supporting Evidence: Applications to Clinical Risk Prediction
Aniruddh Raghu, John Guttag, K. Young, E. Pomerantsev, Adrian Dalca, Collin M. Stultz
04 Mar 2021
Towards Personalized Federated Learning
A. Tan, Han Yu, Li-zhen Cui, Qiang Yang
Tags: FedML, AI4CE
01 Mar 2021
Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs
Harini Suresh, Kathleen M. Lewis, John Guttag, Arvind Satyanarayan
Tags: FAtt
17 Feb 2021
Designing AI for Trust and Collaboration in Time-Constrained Medical Decisions: A Sociotechnical Lens
Maia L. Jacobs, Jeffrey He, Melanie F. Pradier, Barbara D. Lam, Andrew C Ahn, T. McCoy, R. Perlis, Finale Doshi-Velez, Krzysztof Z. Gajos
01 Feb 2021
Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and their Needs
Harini Suresh, Steven R. Gomez, K. Nam, Arvind Satyanarayan
24 Jan 2021
Explaining the Black-box Smoothly - A Counterfactual Approach
Junyu Chen, Yong Du, Yufan He, W. Paul Segars, Ye Li
Tags: MedIm, FAtt
11 Jan 2021
Concept-based model explanations for Electronic Health Records
Diana Mincu, Eric Loreaux, Shaobo Hou, Sebastien Baur, Ivan V. Protsyuk, Martin G. Seneviratne, A. Mottram, Nenad Tomašev, Alan Karthikesanlingam, Jessica Schrouff
03 Dec 2020
A Survey on Deep Learning and Explainability for Automatic Report Generation from Medical Images
Pablo Messina, Pablo Pino, Denis Parra, Alvaro Soto, Cecilia Besa, S. Uribe, Marcelo Andía, C. Tejos, Claudia Prieto, Daniel Capurro
Tags: MedIm
20 Oct 2020
Uncertainty-Aware Deep Ensembles for Reliable and Explainable Predictions of Clinical Time Series
Kristoffer Wickstrøm, Karl Øyvind Mikalsen, Michael C. Kampffmeyer, A. Revhaug, Robert Jenssen
Tags: AI4TS
16 Oct 2020
Assessing the (Un)Trustworthiness of Saliency Maps for Localizing Abnormalities in Medical Imaging
N. Arun, N. Gaw, P. Singh, Ken Chang, M. Aggarwal, ..., J. Patel, M. Gidwani, Julius Adebayo, M. D. Li, Jayashree Kalpathy-Cramer
Tags: FAtt
06 Aug 2020
Generative causal explanations of black-box classifiers
Matthew R. O'Shaughnessy, Gregory H. Canal, Marissa Connor, Mark A. Davenport, Christopher Rozell
Tags: CML
24 Jun 2020
What went wrong and when? Instance-wise Feature Importance for Time-series Models
S. Tonekaboni, Shalmali Joshi, Kieran Campbell, David Duvenaud, Anna Goldenberg
Tags: FAtt, OOD, AI4TS
05 Mar 2020
When Segmentation is Not Enough: Rectifying Visual-Volume Discordance Through Multisensor Depth-Refined Semantic Segmentation for Food Intake Tracking in Long-Term Care
Kaylen J. Pfisterer, Robert Amelard, A. Chung, Braeden Syrnyk, Alexander MacLean, Heather H. Keller, A. Wong
24 Oct 2019
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Y. Gal, Zoubin Ghahramani
Tags: UQCV, BDL
06 Jun 2015