
Video Diffusion Transformers are In-Context Learners

Main: 6 pages · 3 figures · Bibliography: 7 pages
Abstract

This paper investigates a solution for enabling in-context capabilities of video diffusion transformers, with minimal tuning required for activation. Specifically, we propose a simple pipeline to leverage in-context generation: (i) concatenate videos along the spatial or temporal dimension, (ii) jointly caption multi-scene video clips from one source, and (iii) apply task-specific fine-tuning using carefully curated small datasets. Through a series of diverse controllable tasks, we demonstrate qualitatively that existing advanced text-to-video models can effectively perform in-context generation. Notably, this allows for the creation of consistent multi-scene videos exceeding 30 seconds in duration without additional computational overhead. Importantly, the method requires no modifications to the original models and yields high-fidelity video outputs that better align with prompt specifications and maintain role consistency. Our framework presents a valuable tool for the research community and offers critical insights for advancing product-level controllable video generation systems. The data, code, and model weights are publicly available at: this https URL.
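As a rough illustration of step (i), video clips represented as arrays can be joined either along the temporal axis (scenes play back to back) or along a spatial axis (scenes appear side by side in each frame). The sketch below uses NumPy and hypothetical clip shapes; the paper's actual concatenation operates on the model's video inputs, not these toy arrays.

```python
import numpy as np

# Two hypothetical clips as (frames, height, width, channels) arrays.
clip_a = np.zeros((16, 64, 64, 3), dtype=np.float32)
clip_b = np.ones((16, 64, 64, 3), dtype=np.float32)

# Temporal concatenation: one longer clip whose scenes play in sequence.
temporal = np.concatenate([clip_a, clip_b], axis=0)  # shape (32, 64, 64, 3)

# Spatial concatenation: a wider frame with the scenes side by side.
spatial = np.concatenate([clip_a, clip_b], axis=2)   # shape (16, 64, 128, 3)

print(temporal.shape, spatial.shape)
```

Either layout presents multiple scenes to the model as a single sample, which is what lets a jointly written caption describe them together in step (ii).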
