ResearchTrend.AI
Diffusion Transformer Captures Spatial-Temporal Dependencies: A Theory for Gaussian Process Data

23 July 2024
Hengyu Fu
Zehao Dou
Jiawei Guo
Mengdi Wang
Minshuo Chen
Abstract

Diffusion Transformer, the backbone of Sora for video generation, successfully scales the capacity of diffusion models, pioneering new avenues for high-fidelity sequential data generation. Unlike static data such as images, sequential data consists of consecutive data frames indexed by time, exhibiting rich spatial and temporal dependencies. These dependencies represent the underlying dynamic model and are critical for validating the generated data. In this paper, we take a first theoretical step toward understanding how diffusion transformers capture spatial-temporal dependencies. Specifically, we establish score approximation and distribution estimation guarantees of diffusion transformers for learning Gaussian process data with covariance functions of various decay patterns. We highlight how the spatial-temporal dependencies are captured and affect learning efficiency. Our study proposes a novel transformer approximation theory, in which the transformer acts to unroll an algorithm. We support our theoretical results with numerical experiments, providing strong evidence that spatial-temporal dependencies are captured within attention layers, in line with our approximation theory.

@article{fu2025_2407.16134,
  title={Diffusion Transformer Captures Spatial-Temporal Dependencies: A Theory for Gaussian Process Data},
  author={Hengyu Fu and Zehao Dou and Jiawei Guo and Mengdi Wang and Minshuo Chen},
  journal={arXiv preprint arXiv:2407.16134},
  year={2025}
}