SARGes: Semantically Aligned Reliable Gesture Generation via Intent Chain

Co-speech gesture generation enhances the realism of human-computer interaction through speech-synchronized gesture synthesis. However, generating semantically meaningful gestures remains a challenging problem. We propose SARGes, a novel framework that leverages large language models (LLMs) to parse speech content and generate reliable semantic gesture labels, which subsequently guide the synthesis of meaningful co-speech gestures. First, we constructed a comprehensive co-speech gesture ethogram and developed an LLM-based intent chain reasoning mechanism that systematically parses and decomposes gesture semantics into structured inference steps following the ethogram criteria, effectively guiding the LLM to generate context-aware gesture labels. Subsequently, we constructed an intent chain-annotated text-to-gesture-label dataset and trained a lightweight gesture label generation model, which in turn guides the generation of credible and semantically coherent co-speech gestures. Experimental results demonstrate that SARGes achieves highly semantically aligned gesture labeling (50.2% accuracy) with efficient single-pass inference (0.4 seconds). The proposed method provides an interpretable intent reasoning pathway for semantic gesture synthesis.
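
Below is a minimal sketch of the intent-chain labeling step described in the abstract, assuming a hypothetical ethogram excerpt and a generic `call_llm` text-completion function; the actual SARGes prompts, ethogram categories, dataset, and trained label model are not reproduced here.

```python
# Hedged sketch: intent-chain prompting of an LLM to produce a gesture label
# from an utterance, following criteria from a (hypothetical) gesture ethogram.
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical excerpt of a co-speech gesture ethogram: label -> usage criterion.
ETHOGRAM = {
    "beat": "rhythmic emphasis with no referential content",
    "deictic": "pointing to a person, object, place, or abstract referent",
    "iconic": "depicting the shape, size, or motion of a concrete referent",
    "metaphoric": "depicting an abstract concept as if it were concrete",
    "emblem": "conventionalized gesture with a fixed verbal equivalent",
}

@dataclass
class IntentChainResult:
    steps: List[str]   # structured inference steps returned by the LLM
    label: str         # final gesture label drawn from the ethogram

def label_utterance(utterance: str, call_llm: Callable[[str], str]) -> IntentChainResult:
    """Ask the LLM to reason step by step over the ethogram criteria, then
    commit to a single gesture label for the utterance (single-pass inference)."""
    criteria = "\n".join(f"- {name}: {desc}" for name, desc in ETHOGRAM.items())
    prompt = (
        "You annotate co-speech gestures.\n"
        f'Utterance: "{utterance}"\n'
        "Gesture ethogram (label: criterion):\n"
        f"{criteria}\n"
        "Reason in numbered steps: (1) communicative intent, (2) referent type, "
        "(3) matching ethogram criterion. Finish with a line 'LABEL: <label>'."
    )
    response = call_llm(prompt)
    steps = [ln for ln in response.splitlines() if ln and not ln.startswith("LABEL:")]
    label = next(
        (ln.split(":", 1)[1].strip() for ln in response.splitlines() if ln.startswith("LABEL:")),
        "beat",  # fall back to a non-referential label if parsing fails
    )
    return IntentChainResult(steps=steps, label=label)

# Stub LLM so the sketch runs without any API access.
def fake_llm(prompt: str) -> str:
    return ("1. Speaker indicates a location.\n2. Referent is spatial.\n"
            "3. Matches the deictic criterion.\nLABEL: deictic")

print(label_utterance("The exit is right over there.", fake_llm).label)  # -> deictic
```

In the paper's pipeline, the labels produced this way are used to build the intent chain-annotated dataset and to train the lightweight label generation model; the sketch only illustrates the prompting pattern.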