Spiking Point Transformer for Point Cloud Classification

Spiking Neural Networks (SNNs) offer an attractive, energy-efficient alternative to conventional Artificial Neural Networks (ANNs) due to their sparse binary activations. When SNNs meet Transformers, they show great potential in 2D image processing. However, their application to 3D point clouds remains underexplored. To this end, we present Spiking Point Transformer (SPT), the first transformer-based SNN framework for point cloud classification. Specifically, we first design Queue-Driven Sampling Direct Encoding for point clouds to reduce computational cost while retaining the most effective support points at each time step. We then introduce the Hybrid Dynamics Integrate-and-Fire Neuron (HD-IF), designed to simulate selective neuron activation and reduce over-reliance on specific artificial neurons. SPT attains state-of-the-art results in the SNN domain on three benchmark datasets spanning both real-world and synthetic data. Meanwhile, the theoretical energy consumption of SPT is at least 6.4× lower than that of its ANN counterpart.
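For background, the HD-IF neuron mentioned above builds on the standard Integrate-and-Fire model. Below is a minimal sketch of a plain IF neuron over discrete time steps (an illustration only, not the paper's HD-IF dynamics; the function name and threshold value are our own choices):

```python
def if_neuron(inputs, threshold=1.0):
    """Simulate a basic Integrate-and-Fire neuron over discrete time steps.

    Each input current is accumulated into the membrane potential; when the
    potential crosses the threshold, the neuron emits a binary spike (1) and
    the potential is hard-reset to zero. This binary, event-driven output is
    what makes SNN activations sparse and energy-efficient.
    """
    v = 0.0
    spikes = []
    for x in inputs:
        v += x  # integrate input current into membrane potential
        if v >= threshold:
            spikes.append(1)  # fire a binary spike
            v = 0.0           # hard reset after firing
        else:
            spikes.append(0)  # stay silent below threshold
    return spikes

print(if_neuron([0.4, 0.4, 0.4, 1.2]))  # [0, 0, 1, 1]
```

Multiplications with such binary spike trains reduce to additions, which is the basis of the theoretical energy savings claimed for SPT.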
@article{wu2025_2502.15811,
  title={Spiking Point Transformer for Point Cloud Classification},
  author={Peixi Wu and Bosong Chai and Hebei Li and Menghua Zheng and Yansong Peng and Zeyu Wang and Xuan Nie and Yueyi Zhang and Xiaoyan Sun},
  journal={arXiv preprint arXiv:2502.15811},
  year={2025}
}