Sign language recognition from skeletal data using graph and recurrent neural networks
B. Mederos
J. Mejía
A. Medina-Reyes
Y. Espinosa-Almeyda
J. D. Díaz-Roman
I. Rodríguez-Mederos
M. Mejía-Carreon
F. Gonzalez-Lopez
Abstract
This work presents an approach for recognizing isolated sign language gestures using skeleton-based pose data extracted from video sequences. A Graph-GRU temporal network is proposed to model spatial dependencies among skeletal joints and temporal dependencies across frames, enabling accurate classification. The model is trained and evaluated on the AUTSL (Ankara University Turkish Sign Language) dataset, achieving high accuracy. Experimental results demonstrate the effectiveness of integrating graph-based spatial representations with temporal modeling, providing a scalable framework for sign language recognition. The results of this approach highlight the potential of pose-driven methods for sign language understanding.
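For orientation, the sketch below illustrates the general idea of a Graph-GRU pipeline in PyTorch: graph convolutions aggregate features over skeleton joints within each frame, and a GRU models the sequence of frame embeddings before classification. Layer sizes, the joint count, the adjacency matrix, and the normalization scheme are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal Graph-GRU sketch for skeleton-based isolated sign recognition.
# Assumptions: joint count (27), layer widths, and adjacency handling are
# illustrative; they do not reproduce the paper's exact architecture.
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """One graph convolution over skeleton joints within a single frame."""
    def __init__(self, in_dim, out_dim, adjacency):
        super().__init__()
        # Symmetrically normalized adjacency with self-loops, shared across frames.
        a = adjacency + torch.eye(adjacency.size(0))
        d = a.sum(dim=1).pow(-0.5)
        self.register_buffer("a_norm", d.unsqueeze(1) * a * d.unsqueeze(0))
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x):              # x: (batch, frames, joints, in_dim)
        x = torch.einsum("ij,btjc->btic", self.a_norm, x)
        return torch.relu(self.linear(x))

class GraphGRUClassifier(nn.Module):
    """Graph convolution captures spatial joint relations per frame;
    a GRU models temporal dependencies across frames."""
    def __init__(self, adjacency, in_dim=3, gcn_dim=64, gru_dim=128,
                 num_classes=226):   # AUTSL defines 226 sign classes
        super().__init__()
        self.gcn = GraphConv(in_dim, gcn_dim, adjacency)
        self.gru = nn.GRU(gcn_dim * adjacency.size(0), gru_dim, batch_first=True)
        self.head = nn.Linear(gru_dim, num_classes)

    def forward(self, poses):          # poses: (batch, frames, joints, in_dim)
        b, t, j, c = poses.shape
        feats = self.gcn(poses).reshape(b, t, -1)   # flatten joints per frame
        _, h = self.gru(feats)                      # final hidden state
        return self.head(h[-1])                     # per-gesture class logits

# Usage with a hypothetical 27-joint skeleton graph and random pose clips.
adj = torch.zeros(27, 27)
model = GraphGRUClassifier(adj)
logits = model(torch.randn(2, 60, 27, 3))           # 2 clips, 60 frames each
```

In this kind of design, the spatial module operates on each frame independently, so the sequence length can vary between clips as long as the GRU receives a consistent per-frame feature vector.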
