SequentialPointNet: A strong frame-level parallel point cloud sequence network for 3D action recognition

Xing Li
Qian Huang
Tianjin Yang
Abstract

The point cloud sequence of a 3D human action consists of a set of ordered point cloud frames. Compared to static point clouds, point cloud sequences have large data sizes that grow in proportion to the time dimension. Developing an efficient and lightweight point cloud sequence model is therefore pivotal for 3D action recognition. In this paper, we propose a strong frame-level parallel point cloud sequence network, referred to as SequentialPointNet, for 3D action recognition. The key to our approach is to divide the main modeling operations into frame-level units executed in parallel, which greatly improves the efficiency of modeling point cloud sequences. Moreover, we propose to flatten the point cloud sequence into a new point data type named the hyperpoint sequence, which preserves the complete spatial structure of each frame. A novel Hyperpoint-Mixer module is then introduced to mix intra-frame spatial features and inter-frame temporal features of the hyperpoint sequence. By doing so, SequentialPointNet maximizes its appearance encoding ability and extracts sufficient motion information for effective human action recognition. Extensive experiments show that SequentialPointNet is up to 10X faster than existing point cloud sequence models. Additionally, SequentialPointNet surpasses state-of-the-art approaches for human action recognition on both large-scale datasets (i.e., NTU RGB+D 60 and NTU RGB+D 120) and small-scale datasets (i.e., MSR Action3D and UTD-MHAD).
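To make the pipeline described above concrete, the following is a minimal NumPy sketch of the overall data flow, not the paper's actual implementation: each frame is independently encoded into a single "hyperpoint" by a shared per-point transform followed by symmetric max pooling (so frame-level units can run in parallel), and a mixer-style stage then applies one mixing step over the feature dimension (intra-frame, spatial) and one over the time dimension (inter-frame, temporal). All layer shapes, weights, and function names here are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def frame_encoder(points, W):
    """Encode one point cloud frame into a single hyperpoint.

    A shared per-point linear map + ReLU, then max pooling over points,
    which makes the summary invariant to point ordering (PointNet-style).
    """
    feats = np.maximum(points @ W, 0.0)   # (n_points, d) per-point features
    return feats.max(axis=0)              # (d,) hyperpoint for this frame

def hyperpoint_mixer(H, W_spatial, W_temporal):
    """Mix a hyperpoint sequence H of shape (T, d) into one descriptor.

    One mixing step along the feature axis (intra-frame spatial features),
    one along the time axis (inter-frame temporal features), then pooling.
    """
    S = np.maximum(H @ W_spatial, 0.0)    # (T, d) feature-dim (spatial) mixing
    M = np.maximum(W_temporal @ S, 0.0)   # (T, d) time-dim (temporal) mixing
    return M.max(axis=0)                  # (d,) sequence-level descriptor

# Hypothetical toy sizes: T frames, n_points points per frame, 3-D inputs.
T, n_points, d_in, d = 8, 64, 3, 16
seq = rng.normal(size=(T, n_points, d_in))        # a point cloud sequence
W = rng.normal(size=(d_in, d))

# Frame-level units are mutually independent, so this loop is trivially
# parallelizable (e.g., by batching all frames through one encoder).
H = np.stack([frame_encoder(f, W) for f in seq])  # hyperpoint sequence (T, d)

W_spatial = rng.normal(size=(d, d))
W_temporal = rng.normal(size=(T, T))
desc = hyperpoint_mixer(H, W_spatial, W_temporal)
print(desc.shape)  # (16,)
```

In a real model the descriptor would feed a classifier head; the point of the sketch is only that the expensive per-frame work is order-invariant within a frame and independent across frames, which is what enables the frame-level parallelism the abstract emphasizes.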
