ResearchTrend.AI
PvNeXt: Rethinking Network Design and Temporal Motion for Point Cloud Video Recognition

7 April 2025
Jie Wang
Tingfa Xu
Lihe Ding
Xinjie Zhang
Long Bai
Jianan Li
    3DPC
Abstract

Point cloud video perception has become an essential task in 3D vision. Current 4D representation learning techniques typically rely on iterative processing coupled with dense query operations. Although effective at capturing temporal features, this approach incurs substantial computational redundancy. In this work, we propose PvNeXt, a framework for effective yet efficient point cloud video recognition via a personalized one-shot query operation. Specifically, PvNeXt consists of two key modules: the Motion Imitator and the Single-Step Motion Encoder. The Motion Imitator captures the temporal dynamics inherent in point cloud sequences, generating a virtual motion frame corresponding to each input frame. The Single-Step Motion Encoder then performs a one-step query operation, associating the point cloud of each frame with its corresponding virtual motion frame, thereby extracting motion cues and capturing temporal dynamics across the entire sequence. Together, these two modules enable personalized one-shot queries for each frame, eliminating the need for frame-specific looping and intensive query processes. Extensive experiments on multiple benchmarks demonstrate the effectiveness of our method.
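The two-module pipeline in the abstract can be illustrated with a toy sketch. Everything below is an assumption for illustration only, not the paper's implementation: the virtual motion frame is approximated as a per-frame displacement field, and the one-shot query is reduced to a single vectorized pairing of each frame with its motion frame, in contrast to a per-frame loop of dense queries.

```python
import numpy as np

def motion_imitator(frames):
    """Hypothetical Motion Imitator: the virtual motion frame is taken
    to be the displacement to the next frame (the last frame reuses the
    previous displacement). frames: (T, N, 3) -> motion: (T, N, 3)."""
    motion = np.diff(frames, axis=0)                      # (T-1, N, 3)
    return np.concatenate([motion, motion[-1:]], axis=0)  # (T, N, 3)

def single_step_motion_encoder(frames, motion):
    """Hypothetical Single-Step Motion Encoder: one vectorized step
    pairing each frame's points with its own virtual motion frame,
    instead of looping over frames with dense neighbor queries."""
    return np.concatenate([frames, motion], axis=-1)      # (T, N, 6)

rng = np.random.default_rng(0)
frames = rng.standard_normal((4, 128, 3))   # T=4 frames, 128 points each
features = single_step_motion_encoder(frames, motion_imitator(frames))
print(features.shape)  # (4, 128, 6)
```

The point is structural: the motion representation is produced once per frame, and the encoder consumes all frame/motion pairs in a single pass, which is what removes the iterative dense-query cost the abstract describes.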

@article{wang2025_2504.05075,
  title={PvNeXt: Rethinking Network Design and Temporal Motion for Point Cloud Video Recognition},
  author={Jie Wang and Tingfa Xu and Lihe Ding and Xinjie Zhang and Long Bai and Jianan Li},
  journal={arXiv preprint arXiv:2504.05075},
  year={2025}
}