HyLiFormer: Hyperbolic Linear Attention for Skeleton-based Human Action Recognition

Abstract

Transformers have demonstrated remarkable performance in skeleton-based human action recognition, yet their quadratic computational complexity remains a bottleneck for real-world applications. To mitigate this, linear attention mechanisms have been explored but struggle to capture the hierarchical structure of skeleton data. Meanwhile, the Poincaré model, as a typical hyperbolic geometry, offers a powerful framework for modeling hierarchical structures but lacks well-defined operations for existing mainstream linear attention. In this paper, we propose HyLiFormer, a novel hyperbolic linear attention Transformer tailored for skeleton-based action recognition. Our approach incorporates a Hyperbolic Transformation with Curvatures (HTC) module to map skeleton data into hyperbolic space and a Hyperbolic Linear Attention (HLA) module for efficient long-range dependency modeling. Theoretical analysis and extensive experiments on NTU RGB+D and NTU RGB+D 120 datasets demonstrate that HyLiFormer significantly reduces computational complexity while preserving model accuracy, making it a promising solution for efficiency-critical applications.
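The abstract states that the HTC module maps skeleton features into hyperbolic space (the Poincaré model). The paper's exact formulation is not given here, so the following is only a minimal sketch of the standard curvature-aware exponential/logarithmic maps at the origin of the Poincaré ball, which is the usual way Euclidean features are moved into and out of that geometry; the function names `expmap0`/`logmap0` are illustrative, not taken from the paper.

```python
import numpy as np

def expmap0(v, c=1.0, eps=1e-8):
    """Exponential map at the origin of the Poincare ball with curvature -c.

    Maps a Euclidean (tangent-space) feature vector v into the open ball
    of radius 1/sqrt(c):  exp_0(v) = tanh(sqrt(c)*||v||) * v / (sqrt(c)*||v||).
    """
    sqrt_c = np.sqrt(c)
    norm = np.maximum(np.linalg.norm(v, axis=-1, keepdims=True), eps)
    return np.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

def logmap0(x, c=1.0, eps=1e-8):
    """Inverse map: takes a point in the ball back to the tangent space at 0."""
    sqrt_c = np.sqrt(c)
    norm = np.maximum(np.linalg.norm(x, axis=-1, keepdims=True), eps)
    return np.arctanh(np.clip(sqrt_c * norm, 0.0, 1.0 - eps)) * x / (sqrt_c * norm)

# Round trip: a skeleton feature vector enters the ball and comes back unchanged.
v = np.array([[0.3, -0.2, 0.5]])
x = expmap0(v, c=1.0)          # lies strictly inside the unit ball
v_back = logmap0(x, c=1.0)
```

A scheme like this lets attention scores be computed on hyperbolic representations while gradients still flow through ordinary Euclidean parameters; the curvature `c` is the kind of quantity the HTC module's "with Curvatures" naming suggests is learned or tuned.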

@article{li2025_2502.05869,
  title={HyLiFormer: Hyperbolic Linear Attention for Skeleton-based Human Action Recognition},
  author={Yue Li and Haoxuan Qu and Mengyuan Liu and Jun Liu and Yujun Cai},
  journal={arXiv preprint arXiv:2502.05869},
  year={2025}
}