DiffIR2VR-Zero: Zero-Shot Video Restoration with Diffusion-based Image Restoration Models

We present DiffIR2VR-Zero, a zero-shot framework that enables any pre-trained image restoration diffusion model to perform high-quality video restoration without additional training. While image diffusion models have shown remarkable restoration capabilities, their direct application to video leads to temporal inconsistencies, and existing video restoration methods require extensive retraining for different degradation types. Our approach addresses these challenges through two key innovations: a hierarchical latent warping strategy that maintains consistency across both keyframes and local frames, and a hybrid token merging mechanism that adaptively combines optical flow and feature matching. Through extensive experiments, we demonstrate that our method not only preserves the high-quality restoration of base diffusion models but also achieves superior temporal consistency across diverse datasets and degradation conditions, including challenging scenarios such as 8× super-resolution and severe noise. Importantly, our framework works with any image restoration diffusion model, providing a versatile solution for video enhancement without task-specific training or modifications.
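The hybrid token merging idea described above can be illustrated with a minimal sketch: for each token in the current frame, use the optical-flow correspondence when the flow estimate is trustworthy, and fall back to cosine-similarity feature matching otherwise. This is not the paper's implementation; the function name, confidence threshold, and array layout are illustrative assumptions.

```python
import numpy as np

def hybrid_token_match(feat_prev, feat_cur, flow, flow_conf, conf_thresh=0.5):
    """Illustrative hybrid matching (not the paper's exact algorithm).

    feat_prev, feat_cur: (H, W, C) feature maps for consecutive frames.
    flow: (H, W, 2) offsets (dx, dy) mapping current -> previous frame.
    flow_conf: (H, W) per-pixel flow confidence in [0, 1] (assumed given).
    Returns (H, W, 2) integer (y, x) coordinates into feat_prev.
    """
    H, W, C = feat_cur.shape

    # Flow-based correspondence: follow the flow vector, clamp to bounds.
    ys, xs = np.mgrid[0:H, 0:W]
    fx = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    fy = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)

    # Feature-based correspondence: nearest neighbor by cosine similarity.
    a = feat_cur.reshape(-1, C)
    b = feat_prev.reshape(-1, C)
    a = a / (np.linalg.norm(a, axis=1, keepdims=True) + 1e-8)
    b = b / (np.linalg.norm(b, axis=1, keepdims=True) + 1e-8)
    nn = (a @ b.T).argmax(axis=1)  # best match index in feat_prev
    ny, nx = np.unravel_index(nn, (H, W))

    # Adaptive combination: trust flow where its confidence is high,
    # otherwise use the feature-matching result.
    use_flow = flow_conf >= conf_thresh
    match_y = np.where(use_flow, fy, ny.reshape(H, W))
    match_x = np.where(use_flow, fx, nx.reshape(H, W))
    return np.stack([match_y, match_x], axis=-1)
```

In a real pipeline, the matched tokens would then be merged (e.g., averaged or copied) across frames inside the diffusion model's attention layers to enforce temporal consistency; the sketch only shows the correspondence step.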
@article{yeh2025_2407.01519,
  title={DiffIR2VR-Zero: Zero-Shot Video Restoration with Diffusion-based Image Restoration Models},
  author={Chang-Han Yeh and Chin-Yang Lin and Zhixiang Wang and Chi-Wei Hsiao and Ting-Hsuan Chen and Hau-Shiang Shiu and Yu-Lun Liu},
  journal={arXiv preprint arXiv:2407.01519},
  year={2025}
}