

Right for the Wrong Reason: Can Interpretable ML Techniques Detect Spurious Correlations?

International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2023
23 July 2023
Susu Sun
Lisa M. Koch
Christian F. Baumgartner
arXiv: 2307.12344

Papers citing "Right for the Wrong Reason: Can Interpretable ML Techniques Detect Spurious Correlations?"

16 citing papers:
MIMM-X: Disentangling Spurious Correlations for Medical Image Analysis
Louisa Fay
Hajer Reguigui
Bin Yang
S. Gatidis
Thomas Küstner
28 Nov 2025
Label-free estimation of clinically relevant performance metrics under distribution shifts
Tim Flühmann
Alceu Bissoto
Trung-Dung Hoang
Lisa M. Koch
30 Jul 2025
Subgroup Performance Analysis in Hidden Stratifications
International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2025
Alceu Bissoto
Trung-Dung Hoang
Tim Flühmann
Susu Sun
Christian F. Baumgartner
Lisa M. Koch
13 Mar 2025
Prototype-Based Multiple Instance Learning for Gigapixel Whole Slide Image Classification
International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2025
Susu Sun
Dominique van Midden
G. Litjens
Christian F. Baumgartner
11 Mar 2025
Mask of truth: model sensitivity to unexpected regions of medical images
Théo Sourget
Michelle Hestbek-Møller
Amelia Jiménez-Sánchez
Jack Junchi Xu
Veronika Cheplygina
05 Dec 2024
Benchmarking Dependence Measures to Prevent Shortcut Learning in Medical Imaging
Sarah Müller
Louisa Fay
Lisa M. Koch
S. Gatidis
Thomas Küstner
Philipp Berens
26 Jul 2024
Characterizing the Interpretability of Attention Maps in Digital Pathology
Tomé Albuquerque
Anil Yüce
Markus D. Herrmann
Alvaro Gomariz
02 Jul 2024
ViG-Bias: Visually Grounded Bias Discovery and Mitigation
Badr-Eddine Marani
Mohamed Hanini
Nihitha Malayarukil
Stergios Christodoulidis
Maria Vakalopoulou
Enzo Ferrante
02 Jul 2024
Attri-Net: A Globally and Locally Inherently Interpretable Model for Multi-Label Classification Using Class-Specific Counterfactuals
Machine Learning for Biomedical Imaging (MLBI), 2024
Susu Sun
S. Woerner
Andreas Maier
Lisa M. Koch
Christian F. Baumgartner
08 Jun 2024
DeCoDEx: Confounder Detector Guidance for Improved Diffusion-based Counterfactual Explanations
International Conference on Medical Imaging with Deep Learning (MIDL), 2024
Nima Fathi
Amar Kumar
Brennan Nichyporuk
Mohammad Havaei
Tal Arbel
15 May 2024
Enhancing Interpretability of Vertebrae Fracture Grading using Human-interpretable Prototypes
Machine Learning for Biomedical Imaging (MLBI), 2024
Poulami Sinhamahapatra
Antonio Terpin
Anjany Sekuboyina
M. Husseini
D. Schinz
Nicolas Lenhart
Bjoern Menze
Jan Kirschke
Karsten Roscher
Stephan Günnemann
03 Apr 2024
Source Matters: Source Dataset Impact on Model Robustness in Medical Imaging
Dovile Juodelyte
Yucheng Lu
Amelia Jiménez-Sánchez
Sabrina Bottazzi
Enzo Ferrante
Veronika Cheplygina
07 Mar 2024
Fast Diffusion-Based Counterfactuals for Shortcut Removal and Generation
Nina Weng
Paraskevas Pegios
Eike Petersen
Aasa Feragen
Siavash Bigdeli
21 Dec 2023
A Framework for Interpretability in Machine Learning for Medical Imaging
IEEE Access, 2023
Alan Q. Wang
Batuhan K. Karaman
Heejong Kim
Jacob Rosenthal
Rachit Saluja
Sean I. Young
M. Sabuncu
02 Oct 2023
Why is plausibility surprisingly problematic as an XAI criterion?
Weina Jin
Xiaoxiao Li
Ghassan Hamarneh
30 Mar 2023
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAttFaML
2.5K
19,701
0
16 Feb 2016