Occlusion-aware Text-Image-Point Cloud Pretraining for Open-World 3D Object Recognition

Recent open-world representation learning approaches have leveraged CLIP to enable zero-shot 3D object recognition. However, performance on real point clouds with occlusions still falls short due to unrealistic pretraining settings. Additionally, these methods incur high inference costs because they rely on the attention modules of Transformers. In this paper, we make two contributions to address these limitations. First, we propose occlusion-aware text-image-point cloud pretraining to reduce the training-testing domain gap. From 52K synthetic 3D objects, our framework generates nearly 630K partial point clouds for pretraining, consistently improving the real-world recognition performance of existing popular 3D networks. Second, to reduce computational requirements, we introduce DuoMamba, a two-stream linear state space model tailored for point clouds. By integrating two space-filling curves with 1D convolutions, DuoMamba effectively models spatial dependencies between point tokens, offering a powerful alternative to the Transformer. When pretrained with our framework, DuoMamba surpasses current state-of-the-art methods while reducing latency and FLOPs, highlighting the potential of our approach for real-world applications. Our code and data are available at this https URL.
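The abstract does not specify how the partial point clouds are generated; as an illustration only, one common way to simulate self-occlusion on synthetic objects is hidden point removal from randomly sampled viewpoints. The sketch below uses Open3D for this purpose; the function names, viewpoint count, and radius heuristic are assumptions for illustration, not the authors' pipeline.

    # Minimal sketch (not the paper's exact pipeline): derive self-occluded
    # partial point clouds by keeping only the points visible from a random camera.
    import numpy as np
    import open3d as o3d

    def random_camera(radius):
        """Sample a camera position uniformly on a sphere of the given radius."""
        v = np.random.randn(3)
        return radius * v / np.linalg.norm(v)

    def partial_views(points, n_views=12):
        """Yield partial clouds of a full (N, 3) point cloud via hidden point removal."""
        pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points))
        diameter = np.linalg.norm(points.max(0) - points.min(0))
        for _ in range(n_views):
            cam = random_camera(radius=2.0 * diameter)
            # hidden_point_removal returns a mesh and the indices of points
            # visible from the given camera position.
            _, visible_idx = pcd.hidden_point_removal(cam.tolist(), diameter * 100)
            yield points[visible_idx]

Similarly, the abstract mentions serializing point tokens along two space-filling curves before applying 1D convolutions and linear state space layers. A minimal sketch of one such ordering, a Z-order (Morton) curve over quantized coordinates, is shown below; DuoMamba's actual curve choices and two-stream design are detailed in the paper, and the helper name morton_order is hypothetical.

    import numpy as np

    def morton_order(points, bits=10):
        """Return indices that sort (N, 3) points along a Z-order (Morton) curve.
        Coordinates are normalized to [0, 1) and quantized to `bits` bits per axis."""
        p = points - points.min(0)
        p = p / (p.max() + 1e-9)
        q = np.minimum((p * (1 << bits)).astype(np.int64), (1 << bits) - 1)

        def spread(x):
            # Insert two zero bits between consecutive bits of x (for 3 axes).
            out = np.zeros_like(x)
            for i in range(bits):
                out |= ((x >> i) & 1) << (3 * i)
            return out

        codes = spread(q[:, 0]) | (spread(q[:, 1]) << 1) | (spread(q[:, 2]) << 2)
        return np.argsort(codes)

    # Usage: reorder point tokens before a 1D convolution / state space layer.
    # order = morton_order(xyz)            # xyz: (N, 3) coordinates
    # tokens_serialized = tokens[order]    # tokens: (N, C) point features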
@article{nguyen2025_2502.10674,
  title   = {Occlusion-aware Text-Image-Point Cloud Pretraining for Open-World 3D Object Recognition},
  author  = {Khanh Nguyen and Ghulam Mubashar Hassan and Ajmal Mian},
  journal = {arXiv preprint arXiv:2502.10674},
  year    = {2025}
}