Do Explanations Reflect Decisions? A Machine-centric Strategy to Quantify the Performance of Explainability Algorithms
arXiv:1910.07387
16 October 2019
Z. Q. Lin
M. Shafiee
S. Bochkarev
Michael St. Jules
Xiao Yu Wang
A. Wong
    FAtt
Papers citing "Do Explanations Reflect Decisions? A Machine-centric Strategy to Quantify the Performance of Explainability Algorithms"

10 / 10 papers shown
Interactive Diabetes Risk Prediction Using Explainable Machine Learning: A Dash-Based Approach with SHAP, LIME, and Comorbidity Insights
Udaya Allani
FAtt
33
0
0
08 May 2025
An adversarial attack approach for eXplainable AI evaluation on deepfake detection models
Balachandar Gowrisankar
V. Thing
AAML
28
11
0
08 Dec 2023
Can We Trust Explainable AI Methods on ASR? An Evaluation on Phoneme Recognition
Xiao-lan Wu
P. Bell
A. Rajan
19
5
0
29 May 2023
COVID-Net USPro: An Open-Source Explainable Few-Shot Deep Prototypical Network to Monitor and Detect COVID-19 Infection from Point-of-Care Ultrasound Images
Jessy Song
Ashkan Ebadi
A. Florea
Pengcheng Xi
Stéphane Tremblay
Alexander Wong
27
0
0
04 Jan 2023
Evaluating Feature Attribution Methods in the Image Domain
Arne Gevaert
Axel-Jan Rousseau
Thijs Becker
D. Valkenborg
T. D. Bie
Yvan Saeys
FAtt
21
22
0
22 Feb 2022
Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis
Thomas Fel
Mélanie Ducoffe
David Vigouroux
Rémi Cadène
Mikael Capelle
C. Nicodeme
Thomas Serre
AAML
23
41
0
15 Feb 2022
COVID-Net CXR-S: Deep Convolutional Neural Network for Severity Assessment of COVID-19 Cases from Chest X-ray Images
Hossein Aboutalebi
Maya Pavlova
M. Shafiee
A. Sabri
Amer Alaref
Alexander Wong
13
31
0
01 May 2021
Fibrosis-Net: A Tailored Deep Convolutional Neural Network Design for Prediction of Pulmonary Fibrosis Progression from Chest CT Images
A. Wong
Jack Lu
Adam Dorfman
Paul McInnis
M. Famouri
Daniel Manary
J. Lee
Michael Lynch
AI4CE
20
18
0
06 Mar 2021
Explainable deep learning models in medical image analysis
Amitojdeep Singh
S. Sengupta
Vasudevan Lakshminarayanan
XAI
29
482
0
28 May 2020
Explaining Explanations: Axiomatic Feature Interactions for Deep Networks
Joseph D. Janizek
Pascal Sturmfels
Su-In Lee
FAtt
24
143
0
10 Feb 2020