Reangle-A-Video: 4D Video Generation as Video-to-Video Translation

12 March 2025
Hyeonho Jeong
Suhyeon Lee
Jong Chul Ye
Abstract

We introduce Reangle-A-Video, a unified framework for generating synchronized multi-view videos from a single input video. Unlike mainstream approaches that train multi-view video diffusion models on large-scale 4D datasets, our method reframes the multi-view video generation task as video-to-videos translation, leveraging publicly available image and video diffusion priors. In essence, Reangle-A-Video operates in two stages. (1) Multi-View Motion Learning: an image-to-video diffusion transformer is synchronously fine-tuned in a self-supervised manner to distill view-invariant motion from a set of warped videos. (2) Multi-View Consistent Image-to-Images Translation: the first frame of the input video is warped and inpainted into various camera perspectives under inference-time cross-view consistency guidance using DUSt3R, generating multi-view consistent starting images. Extensive experiments on static view transport and dynamic camera control show that Reangle-A-Video surpasses existing methods, establishing a new solution for multi-view video generation. We will publicly release our code and data. Project page: this https URL
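
The two-stage pipeline described in the abstract can be outlined in code. The following is a minimal, hypothetical Python sketch, not the authors' released implementation: warp_video, finetune, warp_image, and inpaint_with_guidance are placeholder callables standing in for the components the abstract names (video warping into target views, self-supervised fine-tuning of the image-to-video diffusion transformer, first-frame warping, and DUSt3R-guided inpainting).

# Minimal structural sketch of the two-stage pipeline; all component names
# below are hypothetical placeholders, not part of any released API.

from typing import Callable, List, Sequence

Video = Sequence        # a video is treated as a sequence of frames
Image = object          # a single frame
CameraPose = object     # a target viewpoint (e.g. rotation + translation)


def stage1_multi_view_motion_learning(
    warp_video: Callable[[Video, CameraPose], Video],
    finetune: Callable[[List[Video]], Callable[[Image], Video]],
    input_video: Video,
    poses: List[CameraPose],
) -> Callable[[Image], Video]:
    """Warp the input video toward each target pose, then fine-tune an
    image-to-video diffusion transformer on the warped set (self-supervised)
    so that it distills view-invariant motion."""
    warped = [warp_video(input_video, pose) for pose in poses]
    return finetune(warped)


def stage2_consistent_starting_images(
    warp_image: Callable[[Image, CameraPose], Image],
    inpaint_with_guidance: Callable[[Image, List[Image]], Image],
    first_frame: Image,
    poses: List[CameraPose],
) -> List[Image]:
    """Warp the input video's first frame into each camera perspective and
    inpaint the missing regions, applying inference-time cross-view
    consistency guidance (DUSt3R-based in the paper) against the starting
    images produced so far."""
    starts: List[Image] = []
    for pose in poses:
        starts.append(inpaint_with_guidance(warp_image(first_frame, pose), starts))
    return starts


def reangle_a_video(components: dict, input_video: Video, poses: List[CameraPose]) -> List[Video]:
    """Wire the two stages together: per-scene motion distillation, then one
    synchronized video per target view, each conditioned on its multi-view
    consistent starting frame."""
    i2v = stage1_multi_view_motion_learning(
        components["warp_video"], components["finetune"], input_video, poses
    )
    starts = stage2_consistent_starting_images(
        components["warp_image"], components["inpaint_with_guidance"],
        input_video[0], poses
    )
    return [i2v(start) for start in starts]

Note that this sketch only reflects the dataflow stated in the abstract (warped videos feed stage 1, consistent starting images feed generation); training details and guidance formulations are left to the paper.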

@article{jeong2025_2503.09151,
  title={Reangle-A-Video: 4D Video Generation as Video-to-Video Translation},
  author={Hyeonho Jeong and Suhyeon Lee and Jong Chul Ye},
  journal={arXiv preprint arXiv:2503.09151},
  year={2025}
}