In this work, we present GPDiT, a Generative Pre-trained Autoregressive Diffusion Transformer that unifies the strengths of diffusion and autoregressive modeling for long-range video synthesis within a continuous latent space. Instead of predicting discrete tokens, GPDiT autoregressively predicts future latent frames using a diffusion loss, enabling natural modeling of motion dynamics and semantic consistency across frames. This continuous autoregressive framework not only enhances generation quality but also endows the model with representation capabilities. Additionally, we introduce a lightweight causal attention variant and a parameter-free rotation-based time-conditioning mechanism, improving both training and inference efficiency. Extensive experiments demonstrate that GPDiT achieves strong performance in video generation quality, video representation ability, and few-shot learning tasks, highlighting its potential as an effective framework for video modeling in continuous space.
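To make the core idea concrete, below is a minimal, hedged sketch of autoregressive diffusion over continuous latent frames: clean past frames condition a causally masked Transformer that denoises the next latent frame, and training regresses the injected noise. This is an illustration under assumptions, not the paper's actual architecture; all names (FrameDenoiser, diffusion_ar_loss, frame_dim) are hypothetical, and the simple linear corruption schedule is a stand-in for whatever diffusion parameterization GPDiT uses.

```python
# Minimal sketch (assumptions throughout): autoregressive diffusion over
# continuous latent frames. Not the authors' implementation.
import torch
import torch.nn as nn


class FrameDenoiser(nn.Module):
    """Toy causal Transformer that denoises the last frame given clean history."""

    def __init__(self, frame_dim: int = 64, num_layers: int = 2, num_heads: int = 4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=frame_dim, nhead=num_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.out = nn.Linear(frame_dim, frame_dim)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, frame_dim); causal mask keeps attention autoregressive.
        T = frames.size(1)
        mask = nn.Transformer.generate_square_subsequent_mask(T).to(frames.device)
        h = self.encoder(frames, mask=mask)
        return self.out(h[:, -1])  # prediction for the final (noised) frame


def diffusion_ar_loss(model: FrameDenoiser, latents: torch.Tensor) -> torch.Tensor:
    """One training step of the sketch: corrupt the final latent frame and
    regress the injected noise, conditioned on the clean preceding frames."""
    past, target = latents[:, :-1], latents[:, -1]
    t = torch.rand(latents.size(0), 1, device=latents.device)  # noise level in (0, 1)
    noise = torch.randn_like(target)
    noisy_target = (1 - t) * target + t * noise  # simple linear (flow-style) corruption
    x = torch.cat([past, noisy_target.unsqueeze(1)], dim=1)
    pred_noise = model(x)
    return nn.functional.mse_loss(pred_noise, noise)


if __name__ == "__main__":
    model = FrameDenoiser()
    latents = torch.randn(2, 8, 64)  # (batch, frames, latent_dim) -- dummy data
    loss = diffusion_ar_loss(model, latents)
    loss.backward()
    print(f"loss = {loss.item():.4f}")
```

At inference, the same loop would be run frame by frame: sample the next latent by iterative denoising conditioned on all previously generated frames, then decode the latents to pixels with the video autoencoder.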
@article{zhang2025_2505.07344,
  title   = {Generative Pre-trained Autoregressive Diffusion Transformer},
  author  = {Yuan Zhang and Jiacheng Jiang and Guoqing Ma and Zhiying Lu and Haoyang Huang and Jianlong Yuan and Nan Duan},
  journal = {arXiv preprint arXiv:2505.07344},
  year    = {2025}
}