Coding-Prior Guided Diffusion Network for Video Deblurring

Abstract

While recent video deblurring methods have advanced significantly, they often overlook two valuable sources of prior information: (1) motion vectors (MVs) and coding residuals (CRs) from video codecs, which provide efficient inter-frame alignment cues, and (2) the rich real-world knowledge embedded in pre-trained diffusion generative models. We present CPGDNet, a novel two-stage framework that effectively leverages both coding priors and generative diffusion priors for high-quality deblurring. First, our coding-prior feature propagation (CPFP) module utilizes MVs for efficient frame alignment and CRs to generate attention masks, addressing motion inaccuracies and texture variations. Second, a coding-prior controlled generation (CPC) module integrates coding priors into a pretrained diffusion model, guiding it to enhance critical regions and synthesize realistic details. Experiments demonstrate our method achieves state-of-the-art perceptual quality with up to 30% improvement in IQA metrics. Both the code and the coding-prior-augmented dataset will be open-sourced.
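To make the propagation idea concrete, below is a minimal PyTorch sketch of how codec motion vectors might warp previous-frame features and how coding residuals might gate the fused result. The function names, tensor shapes, and sigmoid gating are illustrative assumptions, not the paper's actual CPFP implementation.

```python
import torch
import torch.nn.functional as F

def warp_with_mvs(prev_feat, mvs):
    """Warp previous-frame features using codec motion vectors.

    prev_feat: (B, C, H, W) features of the previous frame.
    mvs:       (B, 2, H, W) per-pixel motion vectors in pixels
               (assumed upsampled from the codec's block grid).
    """
    B, _, H, W = prev_feat.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(H, device=prev_feat.device, dtype=prev_feat.dtype),
        torch.arange(W, device=prev_feat.device, dtype=prev_feat.dtype),
        indexing="ij",
    )
    # Shift the grid by the motion vectors.
    x = xs.unsqueeze(0) + mvs[:, 0]
    y = ys.unsqueeze(0) + mvs[:, 1]
    # Normalize to [-1, 1], as required by grid_sample.
    grid = torch.stack(
        (2.0 * x / (W - 1) - 1.0, 2.0 * y / (H - 1) - 1.0), dim=-1
    )
    return F.grid_sample(prev_feat, grid, align_corners=True)

def residual_attention_mask(coding_residual, temperature=1.0):
    """Turn coding residuals into a soft mask: large residual energy
    marks regions where MV-based alignment is likely unreliable."""
    energy = coding_residual.abs().mean(dim=1, keepdim=True)  # (B, 1, H, W)
    return torch.sigmoid(energy / temperature)

def propagate(curr_feat, prev_feat, mvs, coding_residual):
    """Fuse warped and current features, down-weighting regions
    flagged as poorly aligned by the coding residuals."""
    warped = warp_with_mvs(prev_feat, mvs)
    mask = residual_attention_mask(coding_residual)
    return curr_feat + (1.0 - mask) * warped
```

In this sketch, alignment is free because the MVs come from the bitstream rather than an optical-flow network, and the residual-derived mask suppresses warped features exactly where the codec itself had to encode large corrections.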

@article{liu2025_2504.12222,
  title={Coding-Prior Guided Diffusion Network for Video Deblurring},
  author={Yike Liu and Jianhui Zhang and Haipeng Li and Shuaicheng Liu and Bing Zeng},
  journal={arXiv preprint arXiv:2504.12222},
  year={2025}
}