ResearchTrend.AI
Assessing the use of Diffusion models for motion artifact correction in brain MRI

3 February 2025
Paolo Angella
Vito Paolo Pastore
Matteo Santacesaria
Tags: MedIm, DiffM
Abstract

Magnetic Resonance Imaging generally requires long exposure times and is sensitive to patient motion, resulting in artifacts in the acquired images that may hinder their diagnostic relevance. Despite research efforts to decrease acquisition time and to design efficient acquisition sequences, motion artifacts remain a persistent problem, motivating the development of automatic motion artifact correction techniques. Recently, diffusion models have been proposed as a solution for this task. While diffusion models can produce high-quality reconstructions, they are also susceptible to hallucination, which poses risks in diagnostic applications. In this study, we critically evaluate the use of diffusion models for correcting motion artifacts in 2D brain MRI scans. Using a popular benchmark dataset, we compare a diffusion model-based approach with state-of-the-art methods consisting of U-Nets trained in a supervised fashion on motion-affected images to reconstruct the ground-truth motion-free images. Our findings reveal mixed results: depending on data heterogeneity and the acquisition planes considered as input, diffusion models can produce accurate predictions or generate harmful hallucinations in this context.
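The supervised setups described above need paired motion-affected and motion-free images; such pairs are commonly synthesized by corrupting k-space, since MRI is acquired line-by-line and patient motion imposes phase errors on the lines acquired after the movement. The sketch below illustrates this general idea with numpy only; the function name, parameters, and the single-translation motion model are illustrative assumptions, not the simulation pipeline used in the paper.

```python
import numpy as np

def simulate_motion_artifact(image, shift_pixels=3.0, corrupted_fraction=0.3, seed=0):
    """Corrupt a 2D image with a simulated in-plane translation.

    By the Fourier shift theorem, a spatial shift corresponds to a
    linear phase ramp in k-space. Applying that ramp to only a subset
    of phase-encode rows mimics a patient who moved partway through
    the acquisition, which produces ghosting in the reconstruction.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    kspace = np.fft.fftshift(np.fft.fft2(image))

    # Rows acquired "after" the motion event pick up the phase error.
    n_bad = int(corrupted_fraction * h)
    bad_rows = rng.choice(h, size=n_bad, replace=False)

    # Linear phase ramp along the frequency-encode axis = spatial shift.
    kx = np.fft.fftshift(np.fft.fftfreq(w))
    phase = np.exp(-2j * np.pi * kx * shift_pixels)
    kspace[bad_rows, :] *= phase[None, :]

    return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))

# Usage: a synthetic disc phantom gains ghosting, while the clean
# phantom serves as the supervised reconstruction target.
yy, xx = np.mgrid[:128, :128]
phantom = ((yy - 64) ** 2 + (xx - 64) ** 2 < 40 ** 2).astype(float)
artifact = simulate_motion_artifact(phantom)
```

A U-Net (or a diffusion model) is then trained to map `artifact` back to `phantom`; the hallucination risk the abstract highlights arises when the generative model invents plausible-looking anatomy instead of recovering it.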

@article{angella2025_2502.01418,
  title={Assessing the use of Diffusion models for motion artifact correction in brain MRI},
  author={Paolo Angella and Vito Paolo Pastore and Matteo Santacesaria},
  journal={arXiv preprint arXiv:2502.01418},
  year={2025}
}