How I Warped Your Noise: a Temporally-Correlated Noise Prior for Diffusion Models

Abstract

Video editing and generation methods often rely on pre-trained image-based diffusion models. During the diffusion process, however, the reliance on rudimentary noise sampling techniques that do not preserve correlations present in subsequent frames of a video is detrimental to the quality of the results. This produces either high-frequency flickering or texture-sticking artifacts that are not amenable to post-processing. With this in mind, we propose a novel method for preserving temporal correlations in a sequence of noise samples. This approach is realized through a novel noise representation, dubbed ∫-noise (integral noise), that reinterprets individual noise samples as a continuously integrated noise field: pixel values no longer represent discrete samples, but rather the integral of an underlying infinite-resolution noise over the pixel area. Additionally, we propose a carefully tailored transport method that uses ∫-noise to accurately advect noise samples over a sequence of frames, maximizing the correlation between frames while also preserving the noise properties. Our results demonstrate that the proposed ∫-noise can be used for a variety of tasks, such as video restoration, surrogate rendering, and conditional video generation. See this https URL for video results.
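
To make the ∫-noise idea concrete, the sketch below (NumPy) is a simplified illustration, not the authors' implementation: each pixel's noise value is read as the normalized sum of a k x k block of a finer latent noise field, the fine field is advected with a per-pixel flow, and the result is re-integrated with variance-preserving scaling. All names (sample_int_noise, warp_and_reintegrate, k) and the nearest-neighbor warping are illustrative assumptions; the paper's transport method preserves the noise distribution more carefully than this approximation.

import numpy as np

def sample_int_noise(h, w, k, rng):
    """Sample a fine noise field subdivided k x k per pixel; summing each
    block and dividing by k yields a standard Gaussian image of shape (h, w)."""
    fine = rng.standard_normal((h * k, w * k))
    coarse = fine.reshape(h, k, w, k).sum(axis=(1, 3)) / k  # unit variance
    return fine, coarse

def warp_and_reintegrate(fine, flow, k):
    """Backward-warp the fine field with a per-pixel flow (in pixel units),
    then re-integrate over pixel areas with variance-preserving scaling."""
    h, w = flow.shape[:2]
    H, W = fine.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    fy = np.repeat(np.repeat(flow[..., 1], k, axis=0), k, axis=1) * k
    fx = np.repeat(np.repeat(flow[..., 0], k, axis=0), k, axis=1) * k
    src_y = np.clip(np.rint(ys - fy).astype(int), 0, H - 1)
    src_x = np.clip(np.rint(xs - fx).astype(int), 0, W - 1)
    warped = fine[src_y, src_x]  # nearest-neighbor advection of fine samples
    # Dividing by k assumes the warped sub-samples within a pixel remain
    # (approximately) independent; the paper's method handles this exactly.
    return warped.reshape(h, k, w, k).sum(axis=(1, 3)) / k

rng = np.random.default_rng(0)
fine, noise0 = sample_int_noise(64, 64, k=4, rng=rng)
flow = np.full((64, 64, 2), 0.5)                # toy constant half-pixel flow
noise1 = warp_and_reintegrate(fine, flow, k=4)  # correlated with noise0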

@article{chang2025_2504.03072,
  title={How I Warped Your Noise: a Temporally-Correlated Noise Prior for Diffusion Models},
  author={Pascal Chang and Jingwei Tang and Markus Gross and Vinicius C. Azevedo},
  journal={arXiv preprint arXiv:2504.03072},
  year={2025}
}