Rethinking Temporal Fusion with a Unified Gradient Descent View for 3D Semantic Occupancy Prediction

17 April 2025
Dubing Chen
Huan Zheng
Jin Fang
Xingping Dong
Xianfei Li
Wenlong Liao
Tao He
Pai Peng
Jianbing Shen
Abstract

We present GDFusion, a temporal fusion method for vision-based 3D semantic occupancy prediction (VisionOcc). GDFusion investigates underexplored aspects of temporal fusion within the VisionOcc framework, covering both temporal cues and fusion strategies. It systematically examines the entire VisionOcc pipeline, identifying three fundamental yet previously overlooked temporal cues: scene-level consistency, motion calibration, and geometric complementation. These cues capture diverse facets of temporal evolution and make distinct contributions across the modules of the VisionOcc framework. To effectively fuse temporal signals across heterogeneous representations, we propose a novel fusion strategy by reinterpreting the formulation of vanilla RNNs. This reinterpretation leverages gradient descent on features to unify the integration of diverse temporal information, seamlessly embedding the proposed temporal cues into the network. Extensive experiments on nuScenes demonstrate that GDFusion significantly outperforms established baselines. Notably, on the Occ3D benchmark, it achieves 1.4%-4.8% mIoU improvements and reduces memory consumption by 27%-72%.
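The abstract's central idea is that a vanilla RNN update can be reread as one step of gradient descent on the hidden features, which then generalizes to fusing arbitrary temporal cues. The sketch below illustrates only that reinterpretation with a made-up quadratic energy (`gd_fusion_step` and the energy choice are hypothetical, not the paper's actual formulation):

```python
import numpy as np

def gd_fusion_step(h_prev, x_t, lr=0.5):
    """One temporal-fusion step viewed as gradient descent on features.

    Illustrative energy (an assumption, not GDFusion's):
        E(h) = 0.5 * ||h - x_t||^2,
    which pulls the history feature h toward the current observation x_t.
    """
    grad = h_prev - x_t        # dE/dh evaluated at h_prev
    return h_prev - lr * grad  # gradient step = recurrent feature update

# Roll the "RNN" over two frames of dummy features.
h = np.zeros(4)                          # initial history feature
for x in [np.ones(4), 2 * np.ones(4)]:   # incoming per-frame features
    h = gd_fusion_step(h, x)
print(h)  # -> [1.25 1.25 1.25 1.25]
```

With this reading, swapping the energy term changes which temporal cue (e.g. consistency or motion alignment) the same update rule integrates, which is why one mechanism can fuse heterogeneous signals.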

@article{chen2025_2504.12959,
  title={Rethinking Temporal Fusion with a Unified Gradient Descent View for 3D Semantic Occupancy Prediction},
  author={Dubing Chen and Huan Zheng and Jin Fang and Xingping Dong and Xianfei Li and Wenlong Liao and Tao He and Pai Peng and Jianbing Shen},
  journal={arXiv preprint arXiv:2504.12959},
  year={2025}
}