Complementary Information Guided Occupancy Prediction via Multi-Level Representation Fusion

IEEE International Conference on Robotics and Automation (ICRA), 2025
15 October 2025
Rongtao Xu
Jinzhou Lin
Jialei Zhou
Jiahua Dong
Changwei Wang
Ruisheng Wang
Li Guo
Shibiao Xu
Xiaodan Liang
arXiv:2510.13198
Main: 6 pages, 3 figures, 7 tables; Bibliography: 2 pages
Abstract

Camera-based occupancy prediction is a mainstream approach to 3D perception in autonomous driving, aiming to infer complete 3D scene geometry and semantics from 2D images. Most existing methods focus on improving performance through structural modifications, such as lightweight backbones and complex cascaded frameworks, and achieve good but limited gains. Few studies approach the problem from the perspective of representation fusion, leaving the rich diversity of features in 2D images underutilized. Motivated by this, we propose CIGOcc, a two-stage occupancy prediction framework based on multi-level representation fusion. CIGOcc extracts segmentation, graphics, and depth features from an input image and introduces a deformable multi-level fusion mechanism to fuse these three levels of features. Additionally, CIGOcc incorporates knowledge distilled from SAM to further enhance prediction accuracy. Without increasing training costs, CIGOcc achieves state-of-the-art performance on the SemanticKITTI benchmark. The code is provided in the supplementary material and will be released at this https URL.
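
The abstract describes fusing segmentation, graphics, and depth features through a multi-level fusion mechanism. Below is a minimal, hypothetical PyTorch sketch of that general idea: three modality-specific feature maps are projected to a shared channel width and combined with learned per-pixel weights. The module name, channel sizes, and weighting scheme are illustrative assumptions only; the paper's deformable fusion mechanism and SAM distillation are not reproduced here.

```python
# Illustrative sketch of multi-level representation fusion (not the paper's code).
import torch
import torch.nn as nn


class MultiLevelFusion(nn.Module):
    """Hypothetical fusion module: projects three feature maps to a common
    width and blends them with learned per-pixel, per-level weights."""

    def __init__(self, in_channels=(64, 64, 64), fused_channels=128):
        super().__init__()
        # One 1x1 projection per representation (segmentation / graphics / depth).
        self.proj = nn.ModuleList(
            [nn.Conv2d(c, fused_channels, kernel_size=1) for c in in_channels]
        )
        # Predict per-pixel weights over the three representation levels.
        self.weight_head = nn.Conv2d(3 * fused_channels, 3, kernel_size=1)

    def forward(self, seg_feat, graphics_feat, depth_feat):
        feats = [p(f) for p, f in zip(self.proj, (seg_feat, graphics_feat, depth_feat))]
        weights = torch.softmax(self.weight_head(torch.cat(feats, dim=1)), dim=1)
        # Weighted sum over the three levels; each weight slice broadcasts over channels.
        return sum(w * f for w, f in zip(weights.split(1, dim=1), feats))


if __name__ == "__main__":
    seg, gfx, depth = (torch.randn(1, 64, 48, 160) for _ in range(3))
    fused = MultiLevelFusion()(seg, gfx, depth)
    print(fused.shape)  # torch.Size([1, 128, 48, 160])
```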
