
SG-I2V: Self-Guided Trajectory Control in Image-to-Video Generation

7 November 2024
Koichi Namekata
Sherwin Bahmani
Ziyi Wu
Yash Kant
Igor Gilitschenski
David B. Lindell
Abstract

Methods for image-to-video generation have achieved impressive, photo-realistic quality. However, adjusting specific elements in generated videos, such as object motion or camera movement, is often a tedious process of trial and error, e.g., involving re-generating videos with different random seeds. Recent techniques address this issue by fine-tuning a pre-trained model to follow conditioning signals, such as bounding boxes or point trajectories. Yet, this fine-tuning procedure can be computationally expensive, and it requires datasets with annotated object motion, which can be difficult to procure. In this work, we introduce SG-I2V, a framework for controllable image-to-video generation that is self-guided – offering zero-shot control by relying solely on the knowledge present in a pre-trained image-to-video diffusion model without the need for fine-tuning or external knowledge. Our zero-shot method outperforms unsupervised baselines while significantly narrowing the performance gap with supervised models in terms of visual quality and motion fidelity. Additional details and video results are available on our project page: this https URL
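The abstract refers to conditioning signals such as point trajectories, i.e., user-specified pixel positions that an object should follow across frames. The sketch below is not from the paper; it is a hypothetical illustration of how such a trajectory signal might be represented and densified to one position per generated frame (the `Trajectory` type and `interpolate` function are assumptions for illustration only).

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Trajectory:
    """A user-specified point trajectory: (x, y) pixel positions
    given at evenly spaced key frames (hypothetical representation)."""
    points: List[Tuple[float, float]]

def interpolate(traj: Trajectory, num_frames: int) -> List[Tuple[float, float]]:
    """Linearly interpolate key-frame positions to one (x, y) per video frame,
    so a sparse user sketch becomes a dense per-frame conditioning signal."""
    keys = traj.points
    if len(keys) == 1 or num_frames == 1:
        return [keys[0]] * num_frames
    dense = []
    for f in range(num_frames):
        # Map frame index f in [0, num_frames-1] to key index t in [0, len(keys)-1].
        t = f * (len(keys) - 1) / (num_frames - 1)
        i = min(int(t), len(keys) - 2)  # lower key index, clamped at the end
        a = t - i                       # fractional position between keys i and i+1
        x = keys[i][0] * (1 - a) + keys[i + 1][0] * a
        y = keys[i][1] * (1 - a) + keys[i + 1][1] * a
        dense.append((x, y))
    return dense
```

For example, a two-point trajectory from (0, 0) to (10, 10) densified over 3 frames yields positions at (0, 0), (5, 5), and (10, 10), which could then guide per-frame feature alignment in a diffusion model.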

@article{namekata2025_2411.04989,
  title={SG-I2V: Self-Guided Trajectory Control in Image-to-Video Generation},
  author={Koichi Namekata and Sherwin Bahmani and Ziyi Wu and Yash Kant and Igor Gilitschenski and David B. Lindell},
  journal={arXiv preprint arXiv:2411.04989},
  year={2025}
}