ResearchTrend.AI
3D Human Pose Estimation via Spatial Graph Order Attention and Temporal Body Aware Transformer

2 May 2025
Kamel Aouaidjia
Aofan Li
Wenhao Zhang
Chongsheng Zhang
Abstract

Transformers and Graph Convolutional Networks (GCNs) are currently the prevailing techniques for 3D human pose estimation. However, Transformer-based methods either ignore the spatial neighborhood relationships between joints when representing skeletons, or disregard the local temporal patterns of joint movements when modeling skeleton sequences, while GCN-based methods often neglect the need for pose-specific representations. To address these problems, we propose a new method that exploits the graph modeling capability of GCNs to represent each skeleton with multiple graphs of different orders, combined with a newly introduced Graph Order Attention module that dynamically emphasizes the most representative orders for each joint. The resulting spatial features of the sequence are further processed by a proposed temporal Body Aware Transformer that models global body feature dependencies across the sequence while remaining aware of the local inter-skeleton feature dependencies of joints. Since our 3D pose output corresponds to the central 2D pose in the sequence, we modify the self-attention mechanism to focus on the central pose and to gradually diminish attention toward the first and last poses. Extensive experiments on the Human3.6M, MPI-INF-3DHP, and HumanEva-I datasets demonstrate the effectiveness of the proposed method. Code and models are available on GitHub.
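The center-aware self-attention idea above (strongest focus on the central pose, decaying toward the first and last poses) can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, the Gaussian decay profile, and the `sigma` parameter are all illustrative assumptions about one plausible way to realize such a weighting.

```python
import math

def center_aware_weights(seq_len: int, sigma: float = 2.0) -> list[float]:
    """Hypothetical center-aware weighting: emphasis peaks at the central
    frame and decays (here, as a Gaussian) toward the first and last frames.
    Such weights could modulate attention scores over the pose sequence."""
    center = (seq_len - 1) / 2.0  # index of the central 2D pose
    w = [math.exp(-((t - center) ** 2) / (2.0 * sigma ** 2))
         for t in range(seq_len)]
    total = sum(w)
    return [x / total for x in w]  # normalize so the weights sum to 1

weights = center_aware_weights(9)
# The central frame receives the largest weight; the weights fall off
# symmetrically toward both ends of the sequence.
```

In a Transformer this profile would typically be folded into the attention logits (e.g., added as a log-bias before the softmax), so that tokens near the central pose dominate the aggregation while the sequence ends still contribute.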

@article{aouaidjia2025_2505.01003,
  title={3D Human Pose Estimation via Spatial Graph Order Attention and Temporal Body Aware Transformer},
  author={Kamel Aouaidjia and Aofan Li and Wenhao Zhang and Chongsheng Zhang},
  journal={arXiv preprint arXiv:2505.01003},
  year={2025}
}