DVG-Diffusion: Dual-View Guided Diffusion Model for CT Reconstruction from X-Rays

Directly reconstructing a 3D CT volume from few-view 2D X-rays with an end-to-end deep learning network is a challenging task, as X-ray images are merely projection views of the 3D CT volume. In this work, we facilitate the complex 2D X-ray to 3D CT mapping by incorporating new view synthesis, and reduce the learning difficulty through view-guided feature alignment. Specifically, we propose a dual-view guided diffusion model (DVG-Diffusion) that couples a real input X-ray view with a synthesized new X-ray view to jointly guide CT reconstruction. First, a novel view parameter-guided encoder extracts features from the X-rays that are spatially aligned with the CT volume. Next, we concatenate the dual-view features and use them as conditions for a latent diffusion model that learns and refines the CT latent representation. Finally, the CT latent representation is decoded into a CT volume in pixel space. By combining view parameter-guided encoding with dual-view guided reconstruction, DVG-Diffusion achieves an effective balance between high fidelity and perceptual quality in CT reconstruction. Experimental results demonstrate that our method outperforms state-of-the-art methods. We also present a comprehensive analysis and discussion of views and reconstruction based on our experiments.
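To make the described pipeline concrete, below is a minimal PyTorch sketch of the flow: view parameter-guided encoding of the real and synthesized X-ray views into CT-aligned feature volumes, concatenation of the two as the diffusion condition, and a standard epsilon-prediction latent diffusion training step. All module names, tensor shapes, the toy networks, and the cosine noise schedule are our own assumptions for illustration only; they are not the paper's actual architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewGuidedEncoder(nn.Module):
    """Hypothetical view parameter-guided encoder: lifts 2D X-ray features
    into a 3D feature volume aligned with the CT latent grid, conditioned
    on the view angle (shapes and layers are illustrative assumptions)."""
    def __init__(self, feat_ch=32, depth=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, feat_ch * depth, 3, padding=1), nn.ReLU())
        self.view_embed = nn.Linear(1, feat_ch * depth)  # embed view angle
        self.feat_ch, self.depth = feat_ch, depth

    def forward(self, xray, view_angle):
        # xray: (B, 1, H, W); view_angle: (B, 1) in radians
        f = self.backbone(xray)                               # (B, C*D, H, W)
        f = f + self.view_embed(view_angle)[..., None, None]  # inject view param
        B, _, H, W = f.shape
        # reshape to a CT-aligned 3D feature volume: (B, C, D, H, W)
        return f.view(B, self.feat_ch, self.depth, H, W)

class DualViewConditionedDenoiser(nn.Module):
    """Placeholder 3D denoiser (ignores the timestep for brevity);
    a real implementation would be a time-conditioned 3D UNet."""
    def __init__(self, lat_ch=4, feat_ch=32):
        super().__init__()
        self.net = nn.Conv3d(lat_ch + 2 * feat_ch, lat_ch, 3, padding=1)

    def forward(self, z_noisy, t, cond):
        return self.net(torch.cat([z_noisy, cond], dim=1))

def training_step(encoder, denoiser, z0, xray_real, ang_real, xray_syn, ang_syn):
    """One epsilon-prediction diffusion step on the CT latent z0,
    conditioned on concatenated dual-view features."""
    cond = torch.cat([encoder(xray_real, ang_real),
                      encoder(xray_syn, ang_syn)], dim=1)  # dual-view condition
    t = torch.randint(0, 1000, (z0.shape[0],))
    noise = torch.randn_like(z0)
    # assumed cosine schedule for the cumulative signal rate
    alpha_bar = torch.cos(t.float() / 1000 * torch.pi / 2).view(-1, 1, 1, 1, 1) ** 2
    z_t = alpha_bar.sqrt() * z0 + (1 - alpha_bar).sqrt() * noise
    return F.mse_loss(denoiser(z_t, t, cond), noise)

At inference, the same conditioning would guide iterative denoising from pure noise, after which a pretrained autoencoder's decoder (not shown) maps the refined CT latent back to a CT volume in pixel space, matching the final step of the abstract.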
@article{xie2025_2503.17804,
  title={DVG-Diffusion: Dual-View Guided Diffusion Model for CT Reconstruction from X-Rays},
  author={Xing Xie and Jiawei Liu and Huijie Fan and Zhi Han and Yandong Tang and Liangqiong Qu},
  journal={arXiv preprint arXiv:2503.17804},
  year={2025}
}