Free4D: Tuning-free 4D Scene Generation with Spatial-Temporal Consistency

26 March 2025
Tianqi Liu, Zihao Huang, Zhaoxi Chen, Guangcong Wang, Shoukang Hu, Liao Shen, Huiqiang Sun, Zhiguo Cao, Wei Li, Ziwei Liu
    VGen
    3DGS
Abstract

We present Free4D, a novel tuning-free framework for 4D scene generation from a single image. Existing methods either focus on object-level generation, making scene-level generation infeasible, or rely on large-scale multi-view video datasets for expensive training, with limited generalization due to the scarcity of 4D scene data. In contrast, our key insight is to distill pre-trained foundation models into a consistent 4D scene representation, which offers advantages such as efficiency and generalizability. 1) To achieve this, we first animate the input image with an image-to-video diffusion model, followed by 4D geometric structure initialization. 2) To turn this coarse structure into spatially and temporally consistent multi-view videos, we design an adaptive guidance mechanism: a point-guided denoising strategy for spatial consistency and a novel latent replacement strategy for temporal coherence. 3) To lift these generated observations into a consistent 4D representation, we propose a modulation-based refinement that mitigates inconsistencies while fully leveraging the generated information. The resulting 4D representation enables real-time, controllable rendering, marking a significant advancement in single-image-based 4D scene generation.
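
As a rough orientation, the sketch below mirrors the three stages summarized in the abstract in plain Python with NumPy placeholders. Every function name, tensor shape, and operation here is a hypothetical stand-in for illustration only; it is not the authors' models or code.

# Hypothetical end-to-end sketch of the three-stage pipeline described in the
# abstract. All names, shapes, and operations are illustrative placeholders.
import numpy as np

def animate_image(image: np.ndarray, num_frames: int = 16) -> np.ndarray:
    """Stage 1a (placeholder): an image-to-video diffusion model would turn the
    single input image into a short video. Here we simply tile the image."""
    return np.repeat(image[None, ...], num_frames, axis=0)  # (T, H, W, 3)

def init_4d_structure(video: np.ndarray) -> np.ndarray:
    """Stage 1b (placeholder): initialize a coarse 4D geometric structure,
    e.g. a per-frame point cloud. Here we emit random points per frame."""
    num_frames = video.shape[0]
    return np.random.rand(num_frames, 1024, 3)  # (T, N_points, xyz)

def generate_multiview_videos(video: np.ndarray, structure: np.ndarray,
                              num_views: int = 4) -> np.ndarray:
    """Stage 2 (placeholder): the paper's adaptive guidance (point-guided
    denoising for spatial consistency, latent replacement for temporal
    coherence) would produce consistent multi-view videos. Here we just copy
    the reference video to every view."""
    return np.repeat(video[None, ...], num_views, axis=0)  # (V, T, H, W, 3)

def refine_to_4d_representation(multiview_videos: np.ndarray,
                                structure: np.ndarray) -> dict:
    """Stage 3 (placeholder): modulation-based refinement lifts the generated
    observations into a consistent 4D representation (e.g. dynamic 3D
    Gaussians). Here we just package the inputs."""
    return {"geometry": structure, "appearance": multiview_videos.mean(axis=0)}

if __name__ == "__main__":
    image = np.random.rand(256, 256, 3)                       # single input image
    video = animate_image(image)                               # 1) animate the image
    structure = init_4d_structure(video)                       # 1) coarse 4D structure
    views = generate_multiview_videos(video, structure)        # 2) multi-view videos
    scene_4d = refine_to_4d_representation(views, structure)   # 3) 4D representation
    print(scene_4d["geometry"].shape, scene_4d["appearance"].shape)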

View on arXiv
@article{liu2025_2503.20785,
  title={Free4D: Tuning-free 4D Scene Generation with Spatial-Temporal Consistency},
  author={Tianqi Liu and Zihao Huang and Zhaoxi Chen and Guangcong Wang and Shoukang Hu and Liao Shen and Huiqiang Sun and Zhiguo Cao and Wei Li and Ziwei Liu},
  journal={arXiv preprint arXiv:2503.20785},
  year={2025}
}