GTransPDM: A Graph-embedded Transformer with Positional Decoupling for Pedestrian Crossing Intention Prediction

Abstract

Understanding and predicting pedestrian crossing intention is crucial for the driving safety of autonomous vehicles. However, challenges arise when images or environmental context masks are used to extract various factors for time-series network modeling, causing pre-processing errors or a loss of efficiency. Moreover, pedestrian positions captured by onboard cameras are often distorted and do not accurately reflect their actual movements. To address these issues, GTransPDM, a Graph-embedded Transformer with a Positional Decoupling Module, was developed for pedestrian crossing intention prediction by leveraging multi-modal features. First, a positional decoupling module was proposed to decompose pedestrian lateral motion and encode depth cues in the image view. Then, a graph-embedded Transformer was designed to capture the spatio-temporal dynamics of human pose skeletons, integrating essential factors such as position, skeleton, and ego-vehicle motion. Experimental results show that the proposed method achieves 92% accuracy on the PIE dataset and 87% accuracy on the JAAD dataset, with a processing time of 0.05 ms, outperforming state-of-the-art methods.
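
To make the described pipeline concrete, below is a minimal PyTorch sketch of an architecture of the kind outlined in the abstract: a positional decoupling module that separates lateral motion from a depth proxy derived from bounding boxes, a graph-embedded Transformer over pose keypoints, and a fusion head that also takes ego-vehicle speed. All module names, dimensions, the graph construction, and the fusion strategy are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of a GTransPDM-style model (assumed layer sizes and fusion).
import torch
import torch.nn as nn


class PositionalDecouplingModule(nn.Module):
    """Encodes per-frame bounding boxes into lateral-motion and depth cues."""

    def __init__(self, hidden: int = 64):
        super().__init__()
        # Assumed inputs: frame-wise lateral displacement and a depth proxy
        # (bounding-box height change), both derived from the boxes.
        self.lateral = nn.Linear(1, hidden)
        self.depth = nn.Linear(1, hidden)

    def forward(self, boxes: torch.Tensor) -> torch.Tensor:
        # boxes: (B, T, 4) as (x1, y1, x2, y2) in image coordinates
        cx = (boxes[..., 0] + boxes[..., 2]) / 2          # box centre x
        h = boxes[..., 3] - boxes[..., 1]                  # box height
        dx = torch.diff(cx, dim=1, prepend=cx[:, :1])      # lateral motion
        dh = torch.diff(h, dim=1, prepend=h[:, :1])        # depth-change proxy
        return self.lateral(dx.unsqueeze(-1)) + self.depth(dh.unsqueeze(-1))


class GraphEmbeddedTransformer(nn.Module):
    """Graph mixing over pose joints, then a temporal Transformer encoder."""

    def __init__(self, joints: int = 17, hidden: int = 64, heads: int = 4):
        super().__init__()
        # Learnable joint adjacency: a simple stand-in for a skeleton graph.
        self.adj = nn.Parameter(torch.eye(joints))
        self.gcn = nn.Linear(2, hidden)                    # per-joint (x, y) -> hidden
        self.pool = nn.Linear(joints * hidden, hidden)     # flatten joints per frame
        layer = nn.TransformerEncoderLayer(hidden, heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, pose: torch.Tensor) -> torch.Tensor:
        # pose: (B, T, J, 2) image-plane keypoints
        x = torch.einsum("ij,btjc->btic", torch.softmax(self.adj, dim=-1), pose)
        x = self.gcn(x)                                    # (B, T, J, H)
        x = self.pool(x.flatten(2))                        # (B, T, H)
        return self.temporal(x)


class CrossingIntentionModel(nn.Module):
    """Fuses position, skeleton, and ego-vehicle motion for intention prediction."""

    def __init__(self, hidden: int = 64):
        super().__init__()
        self.pdm = PositionalDecouplingModule(hidden)
        self.get = GraphEmbeddedTransformer(hidden=hidden)
        self.ego = nn.Linear(1, hidden)                    # e.g. ego speed per frame
        self.head = nn.Linear(hidden, 1)                   # crossing / not-crossing logit

    def forward(self, boxes, pose, ego_speed):
        feats = self.pdm(boxes) + self.get(pose) + self.ego(ego_speed.unsqueeze(-1))
        return self.head(feats.mean(dim=1))                # pooled temporal logit


# Usage with dummy tensors: 2 sequences of 16 frames, 17 joints each.
model = CrossingIntentionModel()
logit = model(torch.rand(2, 16, 4), torch.rand(2, 16, 17, 2), torch.rand(2, 16))
print(logit.shape)  # torch.Size([2, 1])

The summation-based fusion and mean pooling above are placeholders; the paper's actual fusion of position, skeleton, and ego-motion features may differ.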

@article{xie2025_2409.20223,
  title={GTransPDM: A Graph-embedded Transformer with Positional Decoupling for Pedestrian Crossing Intention Prediction},
  author={Chen Xie and Ciyun Lin and Xiaoyu Zheng and Bowen Gong and Antonio M. López},
  journal={arXiv preprint arXiv:2409.20223},
  year={2025}
}