Scaling Large Motion Models with Million-Level Human Motions

Abstract

Inspired by the recent success of LLMs, the field of human motion understanding has increasingly shifted toward developing large motion models. Despite some progress, current efforts remain far from achieving truly generalist models, primarily due to the lack of massive high-quality data. To address this gap, we present MotionLib, the first million-level dataset for motion generation, which is at least 15× larger than existing counterparts and enriched with hierarchical text descriptions. Using MotionLib, we train a large motion model named Being-M0, which demonstrates robust performance across a wide range of human activities, including unseen ones. Through systematic investigation, we highlight for the first time the importance of scaling both data and model size for advancing motion generation, along with key insights for achieving this goal. To better integrate the motion modality, we propose Motionbook, an innovative motion encoding approach comprising (1) a compact yet lossless feature representation for motions and (2) a novel 2D lookup-free motion tokenizer that preserves fine-grained motion details while expanding codebook capacity, significantly enhancing the representational power of motion tokens. We believe this work lays the groundwork for developing more versatile and powerful motion generation models in the future. For further details, visit this https URL.
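To give a concrete sense of the lookup-free tokenization idea mentioned above, the sketch below shows a minimal sign-based quantizer in PyTorch: each latent channel is binarized, so the implicit codebook grows exponentially with the number of channels and no embedding table is stored or searched. The class name, channel count, and bit-to-token-id mapping are illustrative assumptions; the abstract does not describe the paper's actual 2D tokenizer, which likely differs in detail.

import torch
import torch.nn as nn

class LookupFreeQuantizer(nn.Module):
    """Minimal sketch of a lookup-free quantizer (illustrative, not the paper's design):
    each latent channel is binarized to +1/-1, so the implicit codebook has
    2**num_channels entries and no embedding table is needed."""

    def __init__(self, num_channels: int = 10):
        super().__init__()
        self.num_channels = num_channels
        # Powers of two used to turn the sign pattern into an integer token id.
        self.register_buffer("basis", 2 ** torch.arange(num_channels))

    def forward(self, z: torch.Tensor):
        # z: (batch, time, num_channels) continuous latents from a motion encoder.
        codes = torch.where(z > 0, torch.ones_like(z), -torch.ones_like(z))
        # Straight-through estimator so gradients still reach the encoder.
        z_q = z + (codes - z).detach()
        # Interpret the sign bits as a binary number to obtain discrete token ids.
        bits = (codes > 0).long()
        token_ids = (bits * self.basis).sum(dim=-1)
        return z_q, token_ids

For example, LookupFreeQuantizer(num_channels=10) applied to latents of shape (2, 64, 10) returns quantized latents of the same shape and token ids in [0, 1023], i.e. an effective codebook of 1024 entries without any learned codebook vectors.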

@article{wang2025_2410.03311,
  title={Scaling Large Motion Models with Million-Level Human Motions},
  author={Ye Wang and Sipeng Zheng and Bin Cao and Qianshan Wei and Weishuai Zeng and Qin Jin and Zongqing Lu},
  journal={arXiv preprint arXiv:2410.03311},
  year={2025}
}