FB-4D: Spatial-Temporal Coherent Dynamic 3D Content Generation with Feature Banks

26 March 2025
Jinwei Li
Huan-ang Gao
Wenyi Li
Haohan Chi
Chenyu Liu
Chenxi Du
Yiqian Liu
Mingju Gao
Guiyu Zhang
Zongzheng Zhang
Li Yi
Yao Yao
Jingwei Zhao
Hongyang Li
Yikai Wang
Hao Zhao
Abstract

With the rapid advancements in diffusion models and 3D generation techniques, dynamic 3D content generation has become a crucial research area. However, achieving high-fidelity 4D (dynamic 3D) generation with strong spatial-temporal consistency remains a challenging task. Inspired by recent findings that pretrained diffusion features capture rich correspondences, we propose FB-4D, a novel 4D generation framework that integrates a Feature Bank mechanism to enhance both spatial and temporal consistency in generated frames. In FB-4D, we store features extracted from previous frames and fuse them into the process of generating subsequent frames, ensuring consistent characteristics across both time and multiple views. To ensure a compact representation, the Feature Bank is updated by a proposed dynamic merging mechanism. Leveraging this Feature Bank, we demonstrate for the first time that generating additional reference sequences through multiple autoregressive iterations can continuously improve generation performance. Experimental results show that FB-4D significantly outperforms existing methods in terms of rendering quality, spatial-temporal consistency, and robustness. It surpasses all multi-view generation tuning-free approaches by a large margin and achieves performance on par with training-based methods.
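The abstract describes the core mechanism: features from previously generated frames are stored in a bank, fused into the generation of subsequent frames, and kept compact via dynamic merging. The paper operates on pretrained diffusion attention features; the sketch below is only a minimal, hypothetical illustration of that bookkeeping using plain NumPy vectors, with the capacity, similarity threshold, and running-mean merge rule all chosen for illustration rather than taken from the paper.

```python
import numpy as np

class FeatureBank:
    """Toy fixed-capacity feature store with similarity-based dynamic merging."""

    def __init__(self, capacity=8, merge_threshold=0.9):
        self.capacity = capacity
        self.merge_threshold = merge_threshold
        self.features = []  # unit-norm feature vectors
        self.counts = []    # how many frames each stored entry has absorbed

    def update(self, feat):
        """Insert a new frame feature, merging into the bank when it is full
        or when a sufficiently similar entry already exists."""
        feat = feat / np.linalg.norm(feat)
        if self.features:
            sims = np.array([f @ feat for f in self.features])
            i = int(np.argmax(sims))
            if sims[i] > self.merge_threshold or len(self.features) >= self.capacity:
                # Dynamic merging: fold the new feature into the most
                # similar stored entry as a running mean, then renormalize.
                n = self.counts[i]
                merged = (self.features[i] * n + feat) / (n + 1)
                self.features[i] = merged / np.linalg.norm(merged)
                self.counts[i] = n + 1
                return
        self.features.append(feat)
        self.counts.append(1)

    def fuse(self, query):
        """Attention-style readout: similarity-weighted sum of stored features,
        standing in for fusing bank features into the next frame's generation."""
        bank = np.stack(self.features)
        weights = np.exp(bank @ query)
        weights /= weights.sum()
        return weights @ bank
```

In this sketch, repeated `update` calls keep the bank bounded at `capacity` entries regardless of sequence length, which is the compactness property the abstract attributes to the dynamic merging mechanism.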

@article{li2025_2503.20784,
  title={FB-4D: Spatial-Temporal Coherent Dynamic 3D Content Generation with Feature Banks},
  author={Jinwei Li and Huan-ang Gao and Wenyi Li and Haohan Chi and Chenyu Liu and Chenxi Du and Yiqian Liu and Mingju Gao and Guiyu Zhang and Zongzheng Zhang and Li Yi and Yao Yao and Jingwei Zhao and Hongyang Li and Yikai Wang and Hao Zhao},
  journal={arXiv preprint arXiv:2503.20784},
  year={2025}
}