Dynamical Diffusion: Learning Temporal Dynamics with Diffusion Models

2 March 2025
Xingzhuo Guo
Yu Zhang
Baixu Chen
Haoran Xu
Jianmin Wang
Mingsheng Long
    DiffM
    AI4TS
Abstract

Diffusion models have emerged as powerful generative frameworks by progressively adding noise to data through a forward process and then reversing this process to generate realistic samples. While these models have achieved strong performance across various tasks and modalities, their application to temporal predictive learning remains underexplored. Existing approaches treat predictive learning as a conditional generation problem, but often fail to fully exploit the temporal dynamics inherent in the data, leading to challenges in generating temporally coherent sequences. To address this, we introduce Dynamical Diffusion (DyDiff), a theoretically sound framework that incorporates temporally aware forward and reverse processes. Dynamical Diffusion explicitly models temporal transitions at each diffusion step, establishing dependencies on preceding states to better capture temporal dynamics. Through the reparameterization trick, Dynamical Diffusion achieves efficient training and inference similar to any standard diffusion model. Extensive experiments across scientific spatiotemporal forecasting, video prediction, and time series forecasting demonstrate that Dynamical Diffusion consistently improves performance in temporal predictive tasks, filling a crucial gap in existing methodologies. Code is available at this repository: this https URL.
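The snippet below is a minimal, hedged sketch of the general idea described in the abstract, not the paper's actual formulation (the exact forward and reverse processes of Dynamical Diffusion are not reproduced here). It assumes a toy instantiation in which the noisy sample at each diffusion step depends on the preceding sequence state as well as the clean target, drawn via the reparameterization trick; the function name q_sample_dynamical, the blending rule, and the schedule value alpha_bar_t are hypothetical placeholders.

# Hedged sketch: a toy "temporally aware" forward diffusion draw.
# Assumption (not the authors' method): the signal component interpolates
# from the preceding sequence state toward the clean target as the noise
# level decreases, so every diffusion step carries a dependency on the
# preceding state while sampling stays a single reparameterized draw.
import torch

def q_sample_dynamical(x0, x_prev, alpha_bar_t, noise=None):
    """Draw a noisy sample at one diffusion step.

    x0          : clean target state (e.g. the next frame), shape (B, ...)
    x_prev      : preceding sequence state (e.g. the last observed frame)
    alpha_bar_t : cumulative noise-schedule value in (0, 1]; 1 means no noise
    """
    alpha_bar_t = torch.as_tensor(alpha_bar_t, dtype=x0.dtype)
    if noise is None:
        noise = torch.randn_like(x0)
    # Hypothetical design choice: blend the previous state (dominant at
    # high-noise steps) with the clean target (dominant at low-noise steps).
    blend = alpha_bar_t * x0 + (1.0 - alpha_bar_t) * x_prev
    # Standard reparameterized draw: scaled signal plus scaled Gaussian noise.
    return torch.sqrt(alpha_bar_t) * blend + torch.sqrt(1.0 - alpha_bar_t) * noise

if __name__ == "__main__":
    B, C, H, W = 2, 3, 8, 8
    x_prev = torch.randn(B, C, H, W)   # previous frame
    x0 = torch.randn(B, C, H, W)       # frame to predict
    x_t = q_sample_dynamical(x0, x_prev, alpha_bar_t=0.5)
    print(x_t.shape)  # torch.Size([2, 3, 8, 8])

Under this assumption, training would proceed as in a standard diffusion model: sample a diffusion step, draw a noisy state with q_sample_dynamical, and regress the injected noise, so the per-step cost matches that of ordinary diffusion training.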

@article{guo2025_2503.00951,
  title={Dynamical Diffusion: Learning Temporal Dynamics with Diffusion Models},
  author={Xingzhuo Guo and Yu Zhang and Baixu Chen and Haoran Xu and Jianmin Wang and Mingsheng Long},
  journal={arXiv preprint arXiv:2503.00951},
  year={2025}
}