
SemGeoMo: Dynamic Contextual Human Motion Generation with Semantic and Geometric Guidance

Abstract

Generating reasonable and high-quality human interactive motions in a given dynamic environment is crucial for understanding, modeling, transferring, and applying human behaviors to both virtual and physical robots. In this paper, we introduce an effective method, SemGeoMo, for dynamic contextual human motion generation, which fully leverages text-affordance-joint multi-level semantic and geometric guidance during generation, improving the semantic rationality and geometric correctness of the generated motions. Our method achieves state-of-the-art performance on three datasets and demonstrates superior generalization capability across diverse interaction scenarios. The project page and code can be found at this https URL.
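The abstract mentions fusing text-level semantics, affordance cues, and joint-level geometric guidance to condition motion generation. Below is a minimal, illustrative sketch of what such multi-level conditioning could look like in a diffusion-style denoiser; the class name, feature dimensions, and concatenation-based fusion are assumptions for illustration only and do not reflect the actual SemGeoMo architecture.

```python
import torch
import torch.nn as nn

class MultiLevelGuidanceDenoiser(nn.Module):
    """Toy denoiser fusing text, affordance, and joint-level cues.

    Dimensions and the concatenation-based fusion are illustrative
    assumptions, not the SemGeoMo design.
    """
    def __init__(self, motion_dim=66, text_dim=512, afford_dim=128,
                 joint_dim=66, hidden=256):
        super().__init__()
        cond_dim = text_dim + afford_dim + joint_dim
        self.net = nn.Sequential(
            nn.Linear(motion_dim + cond_dim + 1, hidden),
            nn.SiLU(),
            nn.Linear(hidden, hidden),
            nn.SiLU(),
            nn.Linear(hidden, motion_dim),
        )

    def forward(self, x_t, t, text_emb, afford_feat, joint_guidance):
        # x_t: (B, T, motion_dim) noisy motion; t: (B,) diffusion timestep.
        B, T, _ = x_t.shape
        cond = torch.cat([text_emb, afford_feat, joint_guidance], dim=-1)  # (B, cond_dim)
        cond = cond.unsqueeze(1).expand(B, T, -1)                          # broadcast over frames
        t_emb = t.float().view(B, 1, 1).expand(B, T, 1) / 1000.0           # crude timestep encoding
        return self.net(torch.cat([x_t, cond, t_emb], dim=-1))             # predicted noise

if __name__ == "__main__":
    model = MultiLevelGuidanceDenoiser()
    x_t = torch.randn(2, 60, 66)                       # 2 noisy motion clips, 60 frames
    t = torch.randint(0, 1000, (2,))
    eps_hat = model(x_t, t,
                    text_emb=torch.randn(2, 512),      # sentence-level semantics
                    afford_feat=torch.randn(2, 128),   # scene/object affordance features
                    joint_guidance=torch.randn(2, 66)) # joint-level geometric targets
    print(eps_hat.shape)  # torch.Size([2, 60, 66])
```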

@article{cong2025_2503.01291,
  title={SemGeoMo: Dynamic Contextual Human Motion Generation with Semantic and Geometric Guidance},
  author={Peishan Cong and Ziyi Wang and Yuexin Ma and Xiangyu Yue},
  journal={arXiv preprint arXiv:2503.01291},
  year={2025}
}