MS23D: A 3D Object Detection Method Using Multi-Scale Semantic Feature
Points to Construct 3D Feature Layer
Lidar point clouds, as a type of data with accurate distance perception, can effectively represent the motion and posture of objects in three-dimensional space. However, the sparsity and disorder of point clouds make it challenging to extract features from them directly. Many studies address this by transforming point clouds into regular voxel representations, yet the sparsity of point clouds still makes it difficult for voxel-based two-stage methods to aggregate features effectively within the 3D feature layer. To mitigate these issues, we propose a two-stage 3D detection framework named MS23D. Within MS23D, we introduce a novel approach that constructs the 3D feature layer from multi-scale semantic feature points, converting the sparse 3D feature layer into a more compact representation. Additionally, we predict the offset between each feature point in the 3D feature layer and the object's centroid, positioning the feature points as close to the object's center as possible; this significantly improves the efficiency of feature aggregation. Voxel-based methods often lose fine-grained local feature information during downsampling. By applying voxel encoding at multiple scales, we acquire feature information with varying receptive fields, mitigating this deficiency to some extent. To validate the effectiveness of our approach, we conducted evaluations on both the KITTI dataset and the ONCE dataset.
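The two ideas the abstract describes, compacting a sparse point cloud into per-voxel feature points at multiple scales and regressing each feature point's offset to the nearest object centroid, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; the voxel sizes, mean-pooling encoder, and nearest-centroid target are illustrative assumptions.

```python
import numpy as np

def voxelize(points, voxel_size):
    """Group points into cubic voxels of the given edge length and return
    one mean point per occupied voxel (a compact stand-in for one scale
    of the sparse 3D feature layer)."""
    keys = np.floor(points[:, :3] / voxel_size).astype(np.int64)
    # Map each unique voxel index to the mean of its member points.
    _, inverse, counts = np.unique(
        keys, axis=0, return_inverse=True, return_counts=True)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points[:, :3])
    return sums / counts[:, None]

def centroid_offset_targets(feature_points, box_centers):
    """Regression target for each feature point: the offset to its nearest
    object centroid, which pulls feature points toward object centers."""
    d = np.linalg.norm(feature_points[:, None] - box_centers[None], axis=-1)
    nearest = box_centers[d.argmin(axis=1)]
    return nearest - feature_points

# Toy example: one cluster of lidar returns around a single object center.
rng = np.random.default_rng(0)
pts = rng.normal(loc=[10.0, 2.0, -1.0], scale=0.5, size=(200, 3))
# Two voxel scales give feature points with different receptive fields.
multi_scale = [voxelize(pts, s) for s in (0.2, 0.4)]
offsets = centroid_offset_targets(multi_scale[0],
                                  np.array([[10.0, 2.0, -1.0]]))
```

Adding the predicted offsets to the feature points moves them onto the object centroid, which is what makes subsequent feature aggregation around proposal centers more efficient; the coarser scale yields fewer, larger-receptive-field feature points than the finer one.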