
Mode Seeking meets Mean Seeking for Fast Long Video Generation

Shengqu Cai
Weili Nie
Chao Liu
Julius Berner
Lvmin Zhang
Nanye Ma
Hansheng Chen
Maneesh Agrawala
Leonidas Guibas
Gordon Wetzstein
Arash Vahdat
Main: 8 pages · 4 figures · 2 tables · Appendix: 2 pages · Bibliography: 4 pages
Abstract

Scaling video generation from seconds to minutes faces a critical bottleneck: short-video data is abundant and high-fidelity, while coherent long-form data is scarce and limited to narrow domains. To address this, we propose a training paradigm in which Mode Seeking meets Mean Seeking, decoupling local fidelity from long-term coherence on top of a unified representation via a Decoupled Diffusion Transformer. A global Flow Matching head is trained with supervised learning on long videos to capture narrative structure, while a local Distribution Matching head aligns sliding windows to a frozen short-video teacher through a mode-seeking reverse-KL divergence. The resulting few-step generator synthesizes minute-scale videos, learning long-range coherence and motion from limited long-video data while inheriting the local realism of the teacher in every sliding-window segment. Evaluations show that our method effectively closes the fidelity-horizon gap, jointly improving local sharpness, motion quality, and long-range consistency. Project website: this https URL.
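To make the decoupled objective concrete, below is a minimal PyTorch sketch of one training step under toy assumptions. Everything here is an illustrative stand-in, not the authors' implementation: `ToyVelocityNet`, the one-step sampler, the window stride, and the weight `LAMBDA_DM` are hypothetical, and the fake-score network's own denoising update (standard in DMD-style distillation) is omitted for brevity.

```python
import torch
import torch.nn.functional as F

T_LONG, T_WIN, C, H, W = 64, 16, 3, 8, 8   # toy long-clip and window sizes
LAMBDA_DM = 0.5                            # illustrative loss weight

class ToyVelocityNet(torch.nn.Module):
    """Stand-in for a diffusion-transformer head that predicts flow velocity."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Conv3d(C, C, kernel_size=3, padding=1)

    def forward(self, x_t, t):
        return self.net(x_t)  # the toy net ignores the timestep t

student = ToyVelocityNet()                        # trainable long-video generator
teacher = ToyVelocityNet().requires_grad_(False)  # frozen short-video teacher
fake_score = ToyVelocityNet()                     # tracks the student's own distribution
                                                  # (its own update is omitted here)

def flow_matching_loss(model, x1):
    """Global head: supervised flow matching on a long clip, regressing x1 - x0."""
    x0 = torch.randn_like(x1)
    t = torch.rand(x1.shape[0]).view(-1, 1, 1, 1, 1)
    x_t = (1 - t) * x0 + t * x1
    return F.mse_loss(model(x_t, t), x1 - x0)

def dmd_surrogate(window):
    """Local head: DMD-style mode-seeking reverse-KL gradient on one window.
    The (fake score - teacher score) difference is treated as a constant and
    back-propagated into the student's generated frames."""
    t = torch.rand(window.shape[0]).view(-1, 1, 1, 1, 1)
    noised = (1 - t) * torch.randn_like(window) + t * window.detach()
    with torch.no_grad():
        grad = fake_score(noised, t) - teacher(noised, t)
    return (grad * window).mean()  # gradient w.r.t. window is proportional to `grad`

# 1) Supervised flow matching on the scarce long-video data.
long_clip = torch.randn(2, C, T_LONG, H, W)       # stand-in long-video batch
loss = flow_matching_loss(student, long_clip)

# 2) One-step toy "generation" by the student (real systems use a few steps),
#    kept differentiable so the distribution-matching gradient reaches it.
noise = torch.randn(2, C, T_LONG, H, W)
sample = noise + student(noise, torch.zeros(2).view(-1, 1, 1, 1, 1))

# 3) Align every overlapping sliding window of the sample with the teacher.
for s in range(0, T_LONG - T_WIN + 1, T_WIN // 2):
    loss = loss + LAMBDA_DM * dmd_surrogate(sample[:, :, s:s + T_WIN])

loss.backward()  # student gets both global-coherence and local-realism gradients
```

Read against the title, the two terms split cleanly: the supervised flow-matching loss is the mean-seeking component, covering the long-video distribution for narrative structure, while the reverse-KL term is mode-seeking, snapping each local window onto sharp modes of the short-video teacher.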
