A Spatiotemporal Approach to Tri-Perspective Representation for 3D Semantic Occupancy Prediction

Holistic understanding and reasoning in 3D scenes are crucial for the success of autonomous driving systems. 3D semantic occupancy prediction, which has evolved as a pretraining task for autonomous driving and robotic applications, captures finer 3D details than traditional 3D detection methods. Vision-based 3D semantic occupancy prediction is increasingly overlooked in favor of LiDAR-based approaches, which have shown superior performance in recent years. However, we present compelling evidence that vision-based methods still have substantial room for improvement. Existing approaches predominantly focus on spatial cues such as tri-perspective view (TPV) embeddings, often overlooking temporal cues. This study introduces S2TPVFormer, a spatiotemporal transformer architecture designed to predict temporally coherent 3D semantic occupancy. By introducing temporal cues through a novel Temporal Cross-View Hybrid Attention mechanism (TCVHA), we generate Spatiotemporal TPV (S2TPV) embeddings that enrich the previously spatial-only representation. Experimental evaluations on the nuScenes dataset demonstrate a significant +4.1% absolute gain in mean Intersection over Union (mIoU) for 3D semantic occupancy over the TPVFormer baseline, validating the effectiveness of S2TPVFormer in advancing 3D scene perception.
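To make the high-level idea concrete, below is a minimal, hypothetical PyTorch sketch of how current-frame TPV plane features could attend jointly to their own tokens and to ego-motion-aligned tokens from a previous frame. All names here (TemporalCrossViewAttention, the plane resolutions, the fusion scheme) are illustrative assumptions for exposition, not the paper's actual TCVHA implementation.

```python
# Illustrative sketch only: a simplified stand-in for temporal cross-view
# attention over tri-perspective-view (TPV) planes. This is NOT the authors'
# TCVHA implementation; module names and shapes are assumptions.
import torch
import torch.nn as nn


class TemporalCrossViewAttention(nn.Module):
    """Fuses one TPV plane's current features with a previous frame's features.

    Queries come from the current frame's plane tokens; keys/values are the
    concatenation of current and previous-frame tokens, so each query can
    attend across time as well as within its own view.
    """

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, curr: torch.Tensor, prev: torch.Tensor) -> torch.Tensor:
        # curr, prev: (B, N, C) flattened tokens of one TPV plane.
        ctx = torch.cat([curr, prev], dim=1)      # spatial + temporal context
        out, _ = self.attn(query=curr, key=ctx, value=ctx)
        return self.norm(curr + out)              # residual connection + norm


if __name__ == "__main__":
    B, C = 2, 64
    # Hypothetical resolutions for the HW, DH, and WD planes.
    plane_sizes = [100 * 100, 8 * 100, 100 * 8]
    tcva = TemporalCrossViewAttention(dim=C)
    s2tpv = [
        tcva(torch.randn(B, n, C), torch.randn(B, n, C))  # per-plane fusion
        for n in plane_sizes
    ]
    print([p.shape for p in s2tpv])  # three temporally fused TPV planes
```

In this toy version, temporal fusion is plain multi-head attention over concatenated frames; the published model's attention layout may differ (e.g., more efficient sparse or deformable variants are common in TPV-style architectures).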
@article{silva2025_2401.13785,
  title   = {A Spatiotemporal Approach to Tri-Perspective Representation for 3D Semantic Occupancy Prediction},
  author  = {Sathira Silva and Savindu Bhashitha Wannigama and Gihan Jayatilaka and Muhammad Haris Khan and Roshan Ragel},
  journal = {arXiv preprint arXiv:2401.13785},
  year    = {2025}
}