
MSTA3D: Multi-scale Twin-attention for 3D Instance Segmentation

ACM Multimedia (MM), 2024
Main: 10 pages, 9 figures, 8 tables; Bibliography: 3 pages; Appendix: 1 page
Abstract

Recently, transformer-based techniques incorporating superpoints have become prevalent in 3D instance segmentation. However, they often suffer from over-segmentation, which is especially noticeable on large objects. Unreliable superpoint-based mask predictions further compound this issue. To address these challenges, we propose a novel framework called MSTA3D. It leverages multi-scale feature representation and introduces a twin-attention mechanism to capture these features effectively. Furthermore, MSTA3D integrates a box query with a box regularizer, offering a complementary spatial constraint alongside semantic queries. Experimental evaluations on the ScanNetV2, ScanNet200 and S3DIS datasets demonstrate that our approach surpasses state-of-the-art 3D instance segmentation methods.
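To make the high-level idea concrete, the following is a minimal sketch (not the authors' implementation) of a twin-attention layer: a semantic query branch and a box query branch each cross-attend to the same multi-scale superpoint features, and their outputs are fused. All function names, the additive fusion, and the per-scale summation are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, feats):
    # scaled dot-product cross-attention:
    # queries: (Q, d) learned queries, feats: (N, d) superpoint features
    d = queries.shape[-1]
    attn = softmax(queries @ feats.T / np.sqrt(d), axis=-1)
    return attn @ feats  # (Q, d)

def twin_attention(sem_queries, box_queries, multi_scale_feats):
    # "twin" branches: semantic and box queries attend to the same
    # superpoint features at every scale; outputs are fused additively
    # (a simplifying assumption, not the paper's exact fusion scheme)
    sem_out = sum(cross_attention(sem_queries, f) for f in multi_scale_feats)
    box_out = sum(cross_attention(box_queries, f) for f in multi_scale_feats)
    return sem_out + box_out

# toy usage: 4 instance queries, 8-dim features, two feature scales
rng = np.random.default_rng(0)
sem_q = rng.normal(size=(4, 8))
box_q = rng.normal(size=(4, 8))
scales = [rng.normal(size=(16, 8)), rng.normal(size=(32, 8))]
fused = twin_attention(sem_q, box_q, scales)  # (4, 8)
```

In the actual framework, the box branch additionally predicts axis-aligned box parameters that a box regularizer constrains, giving each query a spatial prior complementary to its semantic mask.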
