SIR-DIFF: Sparse Image Sets Restoration with Multi-View Diffusion Model

18 March 2025
Yucheng Mao
Boyang Wang
Nilesh Kulkarni
Jeong Joon Park
Abstract

The computer vision community has developed numerous techniques for digitally restoring true scene information from single-view degraded photographs, an important yet extremely ill-posed task. In this work, we tackle image restoration from a different perspective by jointly denoising multiple photographs of the same scene. Our core hypothesis is that degraded images capturing a shared scene contain complementary information that, when combined, better constrains the restoration problem. To this end, we implement a powerful multi-view diffusion model that jointly generates uncorrupted views by extracting rich information from multi-view relationships. Our experiments show that our multi-view approach outperforms existing single-view image and even video-based methods on image deblurring and super-resolution tasks. Critically, our model is trained to output 3D consistent images, making it a promising tool for applications requiring robust multi-view integration, such as 3D reconstruction or pose estimation.
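To make the idea of jointly denoising a sparse set of views more concrete, here is a minimal conceptual sketch in PyTorch. It is not the authors' implementation: the module and function names (MultiViewDenoiser, joint_restore), the cross-view attention layout, and the DDPM-style noise schedule are all illustrative assumptions. The only point it demonstrates is that each view's noise prediction can depend on every other degraded view of the same scene, so the set is restored together rather than image by image.

# Conceptual sketch only (hypothetical names, not the paper's code):
# a toy multi-view denoiser plus a joint reverse-diffusion loop.
import torch
import torch.nn as nn

class MultiViewDenoiser(nn.Module):
    """Toy stand-in for a multi-view diffusion denoiser: each view's noise
    estimate attends to all other views of the same scene."""
    def __init__(self, channels=3, dim=64, heads=4):
        super().__init__()
        # Input is the noisy view concatenated with its degraded conditioning image.
        self.encode = nn.Conv2d(channels * 2, dim, 3, padding=1)
        self.cross_view = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.decode = nn.Conv2d(dim, channels, 3, padding=1)

    def forward(self, x_t, degraded, t):
        # x_t, degraded: (B, V, C, H, W); t: (B,) timestep (ignored in this toy).
        B, V, C, H, W = x_t.shape
        h = self.encode(torch.cat([x_t, degraded], dim=2).flatten(0, 1))  # (B*V, D, H, W)
        D = h.shape[1]
        # Attention over the view axis at every spatial location: views exchange information.
        tokens = h.view(B, V, D, H * W).permute(0, 3, 1, 2).reshape(B * H * W, V, D)
        tokens, _ = self.cross_view(tokens, tokens, tokens)
        h = tokens.reshape(B, H * W, V, D).permute(0, 2, 3, 1).reshape(B * V, D, H, W)
        return self.decode(h).view(B, V, C, H, W)  # predicted noise per view

@torch.no_grad()
def joint_restore(model, degraded, num_steps=50):
    """DDPM-style reverse loop that denoises all views of a scene together,
    conditioned on the degraded inputs (schedule values are placeholders)."""
    betas = torch.linspace(1e-4, 2e-2, num_steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn_like(degraded)
    for t in reversed(range(num_steps)):
        eps = model(x, degraded, torch.full((degraded.shape[0],), t))
        mean = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        x = mean + torch.sqrt(betas[t]) * torch.randn_like(x) if t > 0 else mean
    return x

# Usage: jointly restore four degraded views of the same scene.
model = MultiViewDenoiser()
degraded_views = torch.rand(1, 4, 3, 64, 64)
restored_views = joint_restore(model, degraded_views)

The cross-view attention step is the part that distinguishes this setup from running a single-image restorer independently on each photograph: it is where complementary information from the other views can constrain the restoration of each one.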

@article{mao2025_2503.14463,
  title={SIR-DIFF: Sparse Image Sets Restoration with Multi-View Diffusion Model},
  author={Yucheng Mao and Boyang Wang and Nilesh Kulkarni and Jeong Joon Park},
  journal={arXiv preprint arXiv:2503.14463},
  year={2025}
}