
arXiv:2312.12865
RadEdit: stress-testing biomedical vision models via diffusion image editing

20 December 2023
Fernando Pérez-García
Sam Bond-Taylor
Pedro P. Sanchez
B. V. Breugel
Daniel Coelho De Castro
Harshita Sharma
Valentina Salvatelli
Maria T. A. Wetscherek
Hannah Richardson
M. Lungren
A. Nori
Javier Alvarez-Valle
Ozan Oktay
Maximilian Ilse
Abstract

Biomedical imaging datasets are often small and biased, meaning that the real-world performance of predictive models can be substantially lower than expected from internal testing. This work proposes using generative image editing to simulate dataset shifts and diagnose failure modes of biomedical vision models; this can be done in advance of deployment to assess readiness, potentially reducing cost and patient harm. However, existing editing methods can introduce undesirable changes, because they learn spurious correlations from the co-occurrence of disease and treatment interventions, limiting their practical applicability. To address this, we train a text-to-image diffusion model on multiple chest X-ray datasets and introduce RadEdit, a new editing method that uses multiple masks, when present, to constrain changes and ensure consistency in the edited images. We consider three types of dataset shift: acquisition shift, manifestation shift, and population shift, and demonstrate that our approach can diagnose failures and quantify model robustness without additional data collection, complementing more qualitative tools for explainable AI.
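The core idea of mask-constrained editing can be illustrated with a short sketch. The snippet below shows one reverse-diffusion step in the style of RePaint-like inpainting: the model proposes an edit everywhere, but pixels outside the edit mask are reset to a re-noised copy of the original image, so regions that must stay fixed (e.g. support devices) are preserved. This is a minimal illustration, not the paper's implementation; `denoise_step` is a hypothetical stand-in for a trained diffusion model, and the noise schedule is simplified to a single `alpha_bar` value.

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_edit_step(x_t, x_orig, edit_mask, denoise_step, t, alpha_bar):
    """One reverse-diffusion step with a mask constraint (illustrative sketch).

    x_t: current noisy image; x_orig: the original, unedited image.
    edit_mask: 1 where the model may change pixels, 0 where it may not.
    denoise_step: hypothetical model call returning a less-noisy image.
    alpha_bar: cumulative noise-schedule coefficient at step t.
    """
    # The model proposes the next, less-noisy image everywhere.
    x_edit = denoise_step(x_t, t)
    # Re-noise the *original* image to the same noise level, so the
    # unedited region stays faithful to the source scan.
    noise = rng.standard_normal(x_orig.shape)
    x_keep = np.sqrt(alpha_bar) * x_orig + np.sqrt(1.0 - alpha_bar) * noise
    # Composite: edited content only inside the mask, original elsewhere.
    return edit_mask * x_edit + (1.0 - edit_mask) * x_keep
```

In a full sampler this compositing is repeated at every denoising step, which is what keeps the edited and unedited regions consistent with each other rather than blended only at the end.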
