J-MoDL: Joint Model-Based Deep Learning for Optimized Sampling and
Reconstruction
Modern MRI schemes, which rely on compressed sensing or deep learning algorithms to recover MRI data from undersampled multi-channel Fourier measurements, are widely used to reduce scan time. The image quality of these approaches depends heavily on the sampling pattern. We introduce a continuous strategy to jointly optimize the sampling pattern and the parameters of the reconstruction algorithm. We propose to use a model-based deep learning (MoDL) image reconstruction algorithm, which alternates between a data-consistency module and a convolutional neural network (CNN). We use a multi-channel forward model, consisting of a non-uniform Fourier transform with continuously defined sampling locations, to realize the data-consistency block. This approach facilitates the joint and continuous optimization of the sampling pattern and the CNN parameters. We observe that jointly optimizing the sampling pattern and the reconstruction module significantly improves performance compared to current deep learning methods that use variable-density sampling patterns. Our experiments show that the improved decoupling of the CNN parameters from the sampling scheme offered by MoDL translates to better optimization and performance than a similar scheme using a direct-inversion-based reconstruction algorithm. The experiments also show that the proposed scheme converges well and is less sensitive to initialization.
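To make the alternating structure concrete, here is a minimal sketch of an unrolled MoDL-style reconstruction. It is a simplification under assumptions not taken from the paper: a single coil, a Cartesian sampling mask (the paper uses a multi-channel non-uniform Fourier transform with continuously optimized sampling locations), a closed-form data-consistency step in place of conjugate gradients, and a fixed box filter standing in for the trained CNN denoiser.

```python
import numpy as np

def data_consistency(z, b, mask, lam):
    """Closed-form solve of min_x ||M F x - b||^2 + lam ||x - z||^2,
    valid for a single-coil Cartesian mask (diagonal in k-space)."""
    Z = np.fft.fft2(z, norm="ortho")
    Xk = (mask * b + lam * Z) / (mask + lam)
    return np.fft.ifft2(Xk, norm="ortho")

def cnn_denoiser(x):
    """Placeholder for the learned CNN block: a 3x3 box filter."""
    acc = np.zeros_like(x)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += np.roll(np.roll(x, dy, axis=0), dx, axis=1)
    return acc / 9.0

def modl_recon(b, mask, lam=0.05, n_iters=5):
    """Alternate between the denoiser and data consistency."""
    x = data_consistency(np.zeros(mask.shape, dtype=complex), b, mask, lam)
    for _ in range(n_iters):
        z = cnn_denoiser(x)
        x = data_consistency(z, b, mask, lam)
    return x

# Tiny demo: a smooth phantom with ~50% random undersampling.
rng = np.random.default_rng(0)
n = 32
yy, xx = np.mgrid[:n, :n]
truth = np.exp(-((yy - n / 2) ** 2 + (xx - n / 2) ** 2) / (2 * 6.0 ** 2))
mask = (rng.random((n, n)) < 0.5).astype(float)
mask[0, 0] = 1.0  # always keep the zero-frequency coefficient
b = mask * np.fft.fft2(truth, norm="ortho")
recon = modl_recon(b, mask)
```

In the actual J-MoDL scheme, both the CNN weights and the continuous sampling locations inside the forward model are trainable, so the whole pipeline can be optimized end to end by backpropagation; the sketch above only fixes the ideas of unrolling and data consistency.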