Video Diffusion Transformers are In-Context Learners

14 December 2024
Zhengcong Fei
Di Qiu
Changqian Yu
Debang Li
Mingyuan Fan
Abstract

This paper investigates a solution for enabling the in-context capabilities of video diffusion transformers, requiring minimal tuning for activation. Specifically, we propose a simple pipeline to leverage in-context generation: (i) concatenate videos along the spatial or temporal dimension, (ii) jointly caption multi-scene video clips from one source, and (iii) apply task-specific fine-tuning using carefully curated small datasets. Through a series of diverse controllable tasks, we demonstrate qualitatively that existing advanced text-to-video models can effectively perform in-context generation. Notably, the approach allows for the creation of consistent multi-scene videos exceeding 30 seconds in duration without additional computational overhead. Importantly, this method requires no modifications to the original models and produces high-fidelity video outputs that better align with prompt specifications and maintain role consistency. Our framework presents a valuable tool for the research community and offers critical insights for advancing product-level controllable video generation systems. The data, code, and model weights are publicly available at: this https URL.

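As a rough illustration of steps (i) and (ii) of the pipeline, the sketch below shows how multi-scene clips might be concatenated along the time dimension and given a joint caption. This is a minimal, hedged example assuming clips are PyTorch tensors of shape (frames, channels, height, width); the function names are hypothetical and are not taken from the paper's released code.

```python
# Minimal sketch of the in-context input construction described in the abstract.
# Assumptions (not from the paper's code): clips are PyTorch tensors of shape
# (frames, channels, height, width) and per-scene captions are plain strings.
import torch


def concat_clips_temporal(clips: list[torch.Tensor]) -> torch.Tensor:
    """Concatenate multi-scene clips along the time (frame) dimension."""
    return torch.cat(clips, dim=0)


def joint_caption(scene_captions: list[str]) -> str:
    """Join per-scene captions into a single prompt covering all scenes."""
    return " ".join(
        f"[Scene {i + 1}] {caption}" for i, caption in enumerate(scene_captions)
    )


# Example: two 16-frame clips become one 32-frame in-context training sample.
clip_a = torch.randn(16, 3, 256, 256)
clip_b = torch.randn(16, 3, 256, 256)
sample = concat_clips_temporal([clip_a, clip_b])  # shape: (32, 3, 256, 256)
prompt = joint_caption(["A chef dices onions.", "The chef plates the dish."])
```

Step (iii), the task-specific fine-tuning on small curated datasets, would then proceed with an existing text-to-video diffusion transformer on such concatenated samples, which is why no architectural modification is needed.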
@article{fei2025_2412.10783,
  title={Video Diffusion Transformers are In-Context Learners},
  author={Zhengcong Fei and Di Qiu and Debang Li and Changqian Yu and Mingyuan Fan},
  journal={arXiv preprint arXiv:2412.10783},
  year={2025}
}