ResearchTrend.AI

Zero-Shot Video Restoration and Enhancement Using Pre-Trained Image Diffusion Model

2 July 2024
Cong Cao
Huanjing Yue
Xin Liu
Jingyu Yang
Abstract

Diffusion-based zero-shot image restoration and enhancement models have achieved great success across a variety of image restoration and enhancement tasks. However, directly applying them to video restoration and enhancement produces severe temporal flickering artifacts. In this paper, we propose the first framework for zero-shot video restoration and enhancement based on a pre-trained image diffusion model. By replacing the spatial self-attention layer with the proposed short-long-range (SLR) temporal attention layer, the pre-trained image diffusion model can exploit the temporal correlation between frames. We further propose temporal consistency guidance, spatial-temporal noise sharing, and an early stopping sampling strategy to improve temporally consistent sampling. Our method is a plug-and-play module that can be inserted into any diffusion-based image restoration or enhancement method to further improve its performance. Experimental results demonstrate the superiority of our proposed method. Our code is available at this https URL.
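To illustrate the core idea of the SLR temporal attention layer described above, here is a minimal NumPy sketch: instead of each frame's tokens attending only to themselves (spatial self-attention), queries from one frame attend to keys/values gathered from adjacent frames (short range) and strided distant frames (long range). The function name, window size, and stride are illustrative assumptions, not the paper's actual implementation or hyperparameters.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def slr_temporal_attention(frames, short_window=1, long_stride=4):
    """Sketch of short-long-range (SLR) temporal attention.

    frames: array of shape (T, N, C) -- T frames, N tokens per
    frame, C channels. For each frame t, queries attend to keys
    and values drawn from neighbouring frames within +/-
    short_window (short range) plus frames sampled every
    long_stride steps (long range), rather than frame t alone.
    """
    T, N, C = frames.shape
    out = np.empty_like(frames)
    for t in range(T):
        # Short-range neighbours around frame t (inclusive of t).
        short = list(range(max(0, t - short_window),
                           min(T, t + short_window + 1)))
        # Long-range anchors sampled across the whole clip.
        longr = [s for s in range(0, T, long_stride) if s not in short]
        # Pool tokens from all selected frames into one KV bank.
        kv = np.concatenate([frames[s] for s in short + longr], axis=0)
        q = frames[t]                                  # (N, C)
        attn = softmax(q @ kv.T / np.sqrt(C), axis=-1) # (N, M*N)
        out[t] = attn @ kv                             # (N, C)
    return out
```

Because every frame mixes in information from the same long-range anchor frames, outputs of nearby frames are pulled toward a shared reference, which is the intuition behind suppressing frame-to-frame flicker.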

View on arXiv
@article{cao2025_2407.01960,
  title={Zero-Shot Video Restoration and Enhancement Using Pre-Trained Image Diffusion Model},
  author={Cong Cao and Huanjing Yue and Xin Liu and Jingyu Yang},
  journal={arXiv preprint arXiv:2407.01960},
  year={2025}
}