PMA: Towards Parameter-Efficient Point Cloud Understanding via Point Mamba Adapter

27 May 2025
Yaohua Zha
Yanzi Wang
Hang Guo
Jinpeng Wang
Tao Dai
Bin Chen
Zhihao Ouyang
Xue Yuerong
Ke Chen
Shu-Tao Xia
Abstract

Applying pre-trained models to assist point cloud understanding has recently become a mainstream paradigm in 3D perception. However, existing application strategies are straightforward, utilizing only the final output of the pre-trained model for various task heads. This neglects the rich complementary information in the intermediate layers, thereby failing to fully unlock the potential of pre-trained models. To overcome this limitation, we propose an orthogonal solution: the Point Mamba Adapter (PMA), which constructs an ordered feature sequence from all layers of the pre-trained model and leverages Mamba to fuse their complementary semantics, thereby promoting comprehensive point cloud understanding. Constructing this ordered sequence is non-trivial due to the inherent isotropy of 3D space. We therefore further propose a geometry-constrained gate prompt generator (G2PG), shared across different layers, which applies shared geometric constraints to the output gates of Mamba and dynamically optimizes the spatial order, enabling more effective integration of multi-layer information. Extensive experiments on challenging point cloud datasets across various tasks demonstrate that PMA elevates point cloud understanding to a new level by fusing diverse complementary intermediate features. Code is available at this https URL.
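The abstract describes the data flow at a high level rather than an implementation. As a rough illustration only, the sketch below shows one way the core idea, fusing the outputs of all layers of a frozen pre-trained encoder with a trainable Mamba block rather than using only the final layer, could look in PyTorch. It assumes the third-party mamba-ssm package for the Mamba block; the class name, shapes, naive layer-order concatenation, and the pooling head are hypothetical and not taken from the paper or its released code, and the geometry-constrained gate prompt generator (G2PG) is omitted.

# Hypothetical sketch of the PMA idea (not the authors' code).
# Intermediate features from EVERY block of a frozen encoder are
# concatenated into one ordered sequence and fused by a Mamba block.
import torch
import torch.nn as nn
from mamba_ssm import Mamba  # assumed dependency: pip install mamba-ssm

class PointMambaAdapterSketch(nn.Module):
    def __init__(self, encoder_blocks: nn.ModuleList, dim=384, num_classes=40):
        super().__init__()
        self.blocks = encoder_blocks            # frozen pre-trained blocks
        for p in self.blocks.parameters():
            p.requires_grad = False             # parameter-efficient: adapter only
        self.fuser = Mamba(d_model=dim)         # trainable Mamba adapter
        self.head = nn.Linear(dim, num_classes) # trainable task head

    def forward(self, tokens):                  # tokens: (B, N, dim) point tokens
        feats, x = [], tokens
        for blk in self.blocks:
            x = blk(x)
            feats.append(x)                     # keep every layer's output
        # Naive ordered multi-layer sequence: (B, num_layers * N, dim).
        # The paper instead optimizes the spatial order via G2PG.
        seq = torch.cat(feats, dim=1)
        fused = self.fuser(seq)                 # (B, num_layers * N, dim)
        return self.head(fused.mean(dim=1))     # global pooling + classification

With a 12-block encoder and 128 point tokens, the fused sequence has length 12 × 128 = 1536, so only the Mamba adapter and head are trained while the encoder stays frozen; how the sequence is ordered is exactly the non-trivial step the paper addresses with G2PG.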

@article{zha2025_2505.20941,
  title={PMA: Towards Parameter-Efficient Point Cloud Understanding via Point Mamba Adapter},
  author={Yaohua Zha and Yanzi Wang and Hang Guo and Jinpeng Wang and Tao Dai and Bin Chen and Zhihao Ouyang and Xue Yuerong and Ke Chen and Shu-Tao Xia},
  journal={arXiv preprint arXiv:2505.20941},
  year={2025}
}