

SN-LiDAR: Semantic Neural Fields for Novel Space-time View LiDAR Synthesis

11 April 2025
Yi Chen
Tianchen Deng
Wentao Zhao
Xiaoning Wang
Wenqian Xi
Weidong Chen
Jingchuan Wang
Abstract

Recent research has begun exploring novel view synthesis (NVS) for LiDAR point clouds, aiming to generate realistic LiDAR scans from unseen viewpoints. However, most existing approaches do not reconstruct semantic labels, which are crucial for many downstream applications such as autonomous driving and robotic perception. Unlike images, which benefit from powerful segmentation models, LiDAR point clouds lack such large-scale pre-trained models, making semantic annotation time-consuming and labor-intensive. To address this challenge, we propose SN-LiDAR, a method that jointly performs accurate semantic segmentation, high-quality geometric reconstruction, and realistic LiDAR synthesis. Specifically, we employ a coarse-to-fine planar-grid feature representation to extract global features from multi-frame point clouds and leverage a CNN-based encoder to extract local semantic features from the current frame point cloud. Extensive experiments on SemanticKITTI and KITTI-360 demonstrate the superiority of SN-LiDAR in both semantic and geometric reconstruction, effectively handling dynamic objects and large-scale scenes. Code will be available at this https URL.
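The abstract mentions a coarse-to-fine planar-grid feature representation for global features. A minimal sketch of that general idea (not the authors' implementation; function names, the `extent` parameter, and the grid resolutions below are all hypothetical) is to query feature planes at several resolutions with bilinear interpolation and concatenate the per-level features:

```python
import numpy as np

def planar_grid_features(points, grids, extent=50.0):
    """Query coarse-to-fine planar (XY) feature grids with bilinear
    interpolation and concatenate the per-level features.

    points: (N, 3) array of LiDAR points in metres.
    grids:  list of (R, R, C) learnable feature planes, coarse to fine.
    extent: half-size of the scene covered by each plane (hypothetical).
    """
    feats = []
    for grid in grids:
        R, _, C = grid.shape
        # Map x, y from [-extent, extent] into continuous grid coordinates.
        uv = (points[:, :2] / extent + 1.0) * 0.5 * (R - 1)
        u0 = np.clip(np.floor(uv).astype(int), 0, R - 2)
        frac = uv - u0
        # Bilinearly blend the four neighbouring grid cells.
        f00 = grid[u0[:, 0], u0[:, 1]]
        f10 = grid[u0[:, 0] + 1, u0[:, 1]]
        f01 = grid[u0[:, 0], u0[:, 1] + 1]
        f11 = grid[u0[:, 0] + 1, u0[:, 1] + 1]
        fx, fy = frac[:, :1], frac[:, 1:2]
        feats.append(
            (f00 * (1 - fx) + f10 * fx) * (1 - fy)
            + (f01 * (1 - fx) + f11 * fx) * fy
        )
    # Concatenating levels gives each point a coarse-to-fine descriptor.
    return np.concatenate(feats, axis=-1)

rng = np.random.default_rng(0)
points = rng.uniform(-40, 40, size=(1024, 3))
grids = [rng.standard_normal((64, 64, 8)),    # coarse level
         rng.standard_normal((256, 256, 8))]  # fine level
features = planar_grid_features(points, grids)  # shape (1024, 16)
```

In the paper's pipeline these global features would be combined with local semantic features from a CNN-based encoder over the current frame; the sketch above covers only the grid-lookup step.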

@article{chen2025_2504.08361,
  title={SN-LiDAR: Semantic Neural Fields for Novel Space-time View LiDAR Synthesis},
  author={Yi Chen and Tianchen Deng and Wentao Zhao and Xiaoning Wang and Wenqian Xi and Weidong Chen and Jingchuan Wang},
  journal={arXiv preprint arXiv:2504.08361},
  year={2025}
}