
A Plug-and-Play Method for Guided Multi-contrast MRI Reconstruction based on Content/Style Modeling

Abstract

Since multiple MRI contrasts of the same anatomy contain redundant information, one contrast can be used as a prior for guiding the reconstruction of an undersampled subsequent contrast. To this end, several learning-based guided reconstruction methods have been proposed. However, a key challenge is the requirement of large paired training datasets comprising raw data and aligned reference images. We propose a modular two-stage approach for guided reconstruction that addresses this issue and additionally provides an explanatory framework for the multi-contrast problem in terms of the shared and non-shared generative factors underlying two given contrasts. A content/style model of two-contrast image data is learned from a largely unpaired image-domain dataset and is subsequently applied as a plug-and-play operator in iterative reconstruction. The disentanglement of content and style allows explicit representation of contrast-independent and contrast-specific factors. Based on this, incorporating prior information into the reconstruction reduces to simply replacing the aliased content of the image estimate with high-quality content derived from the reference scan. Combining this component with a data consistency step and introducing a general corrective process for the content yields an iterative scheme. We name this novel approach PnP-MUNIT. Various aspects, such as interpretability and convergence, are explored via simulations. Furthermore, its practicality is demonstrated on the NYU fastMRI DICOM dataset and two in-house multi-coil raw datasets, achieving up to 32.6% more acceleration over learning-based non-guided reconstruction for a given SSIM. In a radiological task, PnP-MUNIT allowed 33.3% more acceleration over clinical reconstruction at diagnostic quality.
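The iterative scheme described above alternates two plug-and-play components: a data consistency step on the undersampled measurements and a prior step that replaces the estimate's aliased content with content derived from the reference scan. The following is a minimal numerical sketch of that alternation; the operators `data_consistency` and `content_style_prior` are simplified stand-ins (a gradient step and a convex pull toward reference content), not the authors' learned content/style model or their corrective process.

```python
import numpy as np

def data_consistency(x, A, y, step=0.5):
    """Gradient step enforcing agreement with the undersampled
    measurements y under the forward operator A (||Ax - y||^2 term)."""
    return x - step * A.T @ (A @ x - y)

def content_style_prior(x, content_ref, mix=0.1):
    """Toy stand-in for the content/style operator: pull the estimate
    toward high-quality content derived from the reference scan.
    In PnP-MUNIT this role is played by a learned content/style model."""
    return (1 - mix) * x + mix * content_ref

def pnp_reconstruct(A, y, content_ref, n_iter=100):
    """Alternate data consistency and the content-replacement prior."""
    x = A.T @ y  # zero-filled (adjoint) initial estimate
    for _ in range(n_iter):
        x = data_consistency(x, A, y)
        x = content_style_prior(x, content_ref)
    return x

# Small synthetic example: an "undersampling" operator with
# orthonormal rows, measuring half of a 16-dimensional image.
rng = np.random.default_rng(0)
n, m = 16, 8
Q, _ = np.linalg.qr(rng.standard_normal((n, m)))
A = Q.T                      # m x n, orthonormal rows
x_true = rng.standard_normal(n)
y = A @ x_true               # undersampled measurements
x_hat = pnp_reconstruct(A, y, content_ref=x_true)
```

With an informative reference (here, idealized as the ground truth itself), the prior step fills in the null-space components that data consistency alone cannot recover, and the iteration contracts geometrically toward the true image.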

@article{rao2025_2409.13477,
  title={A Plug-and-Play Method for Guided Multi-contrast MRI Reconstruction based on Content/Style Modeling},
  author={Chinmay Rao and Matthias van Osch and Nicola Pezzotti and Jeroen de Bresser and Laurens Beljaards and Jakob Meineke and Elwin de Weerdt and Huangling Lu and Mariya Doneva and Marius Staring},
  journal={arXiv preprint arXiv:2409.13477},
  year={2025}
}