
SparseWorld-TC: Trajectory-Conditioned Sparse Occupancy World Model

Jiayuan Du
Yiming Zhao
Zhenglong Guo
Yong Pan
Wenbo Hou
Zhihui Hao
Kun Zhan
Qijun Chen
Main: 8 pages · 11 figures · 8 tables · Bibliography: 2 pages · Appendix: 4 pages
Abstract

This paper introduces a novel architecture for trajectory-conditioned forecasting of future 3D scene occupancy. In contrast to methods that rely on variational autoencoders (VAEs) to generate discrete occupancy tokens, which inherently limit representational capacity, our approach predicts multi-frame future occupancy in an end-to-end manner directly from raw image features. Inspired by the success of attention-based transformer architectures in foundational vision and language models such as GPT and VGGT, we employ a sparse occupancy representation that bypasses the intermediate bird's eye view (BEV) projection and its explicit geometric priors. This design allows the transformer to capture spatiotemporal dependencies more effectively. By avoiding both the finite-capacity constraint of discrete tokenization and the structural limitations of BEV representations, our method achieves state-of-the-art performance on the nuScenes benchmark for 1-3 second occupancy forecasting, outperforming existing approaches by a significant margin. Furthermore, it demonstrates robust scene dynamics understanding, consistently delivering high accuracy under arbitrary future trajectory conditioning.
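The abstract describes forecasting future occupancy directly from raw image features via attention, with a sparse occupancy representation (queries only for occupied regions, no BEV projection) conditioned on a future trajectory. The sketch below illustrates that data flow in plain numpy: sparse occupancy queries cross-attend to image feature tokens and then to trajectory waypoint embeddings before a per-query occupancy head. This is a minimal illustration under assumed shapes and random weights, not the paper's architecture; all names and dimensions are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, k, v):
    # single-head scaled dot-product attention
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

rng = np.random.default_rng(0)
d = 32        # feature dimension (illustrative)
n_occ = 200   # sparse occupancy queries: only candidate-occupied voxels
n_img = 500   # flattened raw image feature tokens
n_way = 6     # embedded future trajectory waypoints (the conditioning signal)

occ_queries = rng.normal(size=(n_occ, d))
img_feats = rng.normal(size=(n_img, d))
traj_embed = rng.normal(size=(n_way, d))

# occupancy queries gather evidence directly from image features
# (no intermediate BEV grid), then condition on the future trajectory
x = occ_queries + attend(occ_queries, img_feats, img_feats)
x = x + attend(x, traj_embed, traj_embed)

# linear head: one occupancy logit per sparse query
w = rng.normal(size=(d, 1))
logits = x @ w
print(logits.shape)
```

Because the queries cover only occupied voxels rather than a dense grid, attention cost scales with scene content instead of grid resolution, which is the practical appeal of a sparse representation.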
