ResearchTrend.AI
CamContextI2V: Context-aware Controllable Video Generation

8 April 2025
Luis Denninger
Sina Mokhtarzadeh Azar
Juergen Gall
Abstract

Recently, image-to-video (I2V) diffusion models have demonstrated impressive scene understanding and generative quality, incorporating image conditions to guide generation. However, these models primarily animate static images without extending beyond their provided context. Introducing additional constraints, such as camera trajectories, can enhance diversity but often degrades visual quality, limiting their applicability for tasks requiring faithful scene representation. We propose CamContextI2V, an I2V model that integrates multiple image conditions with 3D constraints alongside camera control to enrich both global semantics and fine-grained visual details. This enables more coherent and context-aware video generation. Moreover, we motivate the necessity of temporal awareness for an effective context representation. Our comprehensive study on the RealEstate10K dataset demonstrates improvements in visual quality and camera controllability. We make our code and models publicly available at: this https URL.
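The abstract names the core idea, conditioning generation on multiple context images rather than a single first frame, but only at a high level. As a hedged illustration (not the paper's actual architecture), the sketch below shows one generic way a denoiser's tokens could cross-attend to a pooled bank of tokens from several context images; all function and variable names here are assumptions for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def context_cross_attention(video_tokens, context_images):
    """Illustrative single-head cross-attention.

    video_tokens:   (L, D) array of video latent tokens.
    context_images: list of (L_i, D) token arrays, one per context image.
    Returns video tokens residually updated with context information.
    """
    # Concatenate tokens from all context images into one key/value bank,
    # so generation is conditioned on more than the single first frame.
    bank = np.concatenate(context_images, axis=0)       # (sum_i L_i, D)
    d = video_tokens.shape[-1]
    scores = video_tokens @ bank.T / np.sqrt(d)         # (L, sum_i L_i)
    return video_tokens + softmax(scores) @ bank        # residual update

rng = np.random.default_rng(0)
video = rng.standard_normal((4, 8))                     # 4 video tokens
ctx = [rng.standard_normal((3, 8)), rng.standard_normal((5, 8))]
out = context_cross_attention(video, ctx)
print(out.shape)  # (4, 8)
```

The output keeps the video-token shape while mixing in information from every context image; the real model would additionally apply the 3D and camera-trajectory constraints the abstract mentions, which this sketch omits.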

@article{denninger2025_2504.06022,
  title={CamContextI2V: Context-aware Controllable Video Generation},
  author={Luis Denninger and Sina Mokhtarzadeh Azar and Juergen Gall},
  journal={arXiv preprint arXiv:2504.06022},
  year={2025}
}