I2I-PR: Deep Iterative Refinement for Phase Retrieval using Image-to-Image Diffusion Models
Phase retrieval aims to recover a signal from intensity-only measurements, a fundamental problem in many fields such as imaging, holography, optical computing, crystallography, and microscopy. Although there are several well-known phase retrieval algorithms, including classical alternating projection-based solvers, reconstruction performance often remains sensitive to initialization and measurement noise. Recently, diffusion models have gained traction in various image reconstruction tasks, yielding significant theoretical insights and practical advances. In this work, we introduce a deep iterative refinement framework that redefines the role of diffusion models in phase retrieval. Instead of generating images from random noise, our method starts with multiple physically consistent initial estimates and iteratively refines them through a learned image-to-image diffusion process. This enables data-driven phase retrieval that is both interpretable and robust, leveraging the strengths of classical solvers while mitigating their weaknesses. Furthermore, we propose an enhanced initialization strategy that integrates classical algorithms with a novel acceleration mechanism to obtain reliable initial estimates. During inference, we adopt a geometric self-ensemble strategy based on input flipping, together with output aggregation, to further improve the final reconstruction quality. Comprehensive experiments demonstrate that our approach achieves substantial gains in both training efficiency and reconstruction quality, consistently outperforming classical and recent state-of-the-art methods. These results highlight the potential of diffusion-driven refinement as an effective and general framework for robust phase retrieval across diverse applications. The source code and trained models are available at this https URL
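The abstract does not specify the paper's solver, but the "classical alternating projection-based solvers" it builds on follow the well-known Gerchberg-Saxton / error-reduction pattern: alternate between enforcing the measured Fourier magnitudes and an object-domain constraint. A minimal NumPy sketch, assuming a real, non-negative image and Fourier-magnitude measurements (all names here are illustrative, not from the paper):

```python
import numpy as np

def gerchberg_saxton(magnitude, n_iters=200, seed=0):
    """Classical alternating-projection phase retrieval (error reduction).

    magnitude: measured Fourier magnitudes |F(x)| of an unknown real,
    non-negative image. Returns an estimate of x. Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    # Random phase initialization -- the source of the sensitivity to
    # initialization that the abstract refers to.
    phase = rng.uniform(0.0, 2.0 * np.pi, size=magnitude.shape)
    x = np.real(np.fft.ifft2(magnitude * np.exp(1j * phase)))
    for _ in range(n_iters):
        # Fourier-domain projection: keep the current phase,
        # impose the measured magnitudes.
        F = np.fft.fft2(x)
        F = magnitude * np.exp(1j * np.angle(F))
        # Object-domain projection: real-valued and non-negative.
        x = np.real(np.fft.ifft2(F))
        x = np.clip(x, 0.0, None)
    return x
```

In the framework described above, iterates like these would serve only as physically consistent initial estimates, with the learned image-to-image diffusion process performing the subsequent refinement.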
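The geometric self-ensemble mentioned for inference is a standard test-time augmentation idea: run the model on flipped copies of the input, undo each flip on the corresponding output, and aggregate. A small sketch, assuming `model` is any callable mapping an HxW array to an HxW array and that aggregation is a simple mean (both assumptions, since the abstract does not state the aggregation rule):

```python
import numpy as np

def self_ensemble(model, x):
    """Geometric self-ensemble via horizontal/vertical flips.

    Runs `model` (a hypothetical HxW -> HxW callable) on all four
    flip variants of `x`, undoes each flip on the output, and
    averages the aligned results.
    """
    outputs = []
    for flip_h in (False, True):
        for flip_v in (False, True):
            t = x
            if flip_h:
                t = np.flip(t, axis=1)
            if flip_v:
                t = np.flip(t, axis=0)
            y = model(t)
            # Flips are self-inverse, so applying them again
            # maps the output back to the original orientation.
            if flip_v:
                y = np.flip(y, axis=0)
            if flip_h:
                y = np.flip(y, axis=1)
            outputs.append(y)
    return np.mean(outputs, axis=0)
```

For an equivariant model this leaves the prediction unchanged; for a learned network it averages out orientation-dependent errors, which is the source of the reported quality gain.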