HuMMan: Multi-Modal 4D Human Dataset for Versatile Sensing and Modeling

28 April 2022 · arXiv:2204.13686
Zhongang Cai, Daxuan Ren, Ailing Zeng, Zhengyu Lin, Tao Yu, Wenjia Wang, Xiangyu Fan, Yangmin Gao, Yifan Yu, Liang Pan, Fangzhou Hong, Mingyuan Zhang, Chen Change Loy, Lei Yang, Ziwei Liu
Abstract

4D human sensing and modeling are fundamental tasks in vision and graphics with numerous applications. With advances in sensors and algorithms, there is an increasing demand for more versatile datasets. In this work, we contribute HuMMan, a large-scale multi-modal 4D human dataset with 1000 human subjects, 400k sequences, and 60M frames. HuMMan has several appealing properties: 1) multi-modal data and annotations, including color images, point clouds, keypoints, SMPL parameters, and textured meshes; 2) a popular mobile device is included in the sensor suite; 3) a set of 500 actions designed to cover fundamental movements; 4) multiple tasks, such as action recognition, pose estimation, parametric human recovery, and textured mesh reconstruction, are supported and evaluated. Extensive experiments on HuMMan highlight the need for further study of challenges such as fine-grained action recognition, dynamic human mesh reconstruction, point-cloud-based parametric human recovery, and cross-device domain gaps.
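To make the listed modalities concrete, below is a minimal sketch of what a single multi-modal frame could look like in code. The class name HuMManFrame and all field names and array shapes are illustrative assumptions, not HuMMan's actual file format; only the SMPL dimensions (72 axis-angle pose parameters, 10 shape coefficients, a 6890-vertex mesh) follow the standard SMPL model.

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical per-frame record illustrating the modalities the abstract lists.
# Field names and shapes are illustrative assumptions, not the dataset's real
# layout; SMPL dimensions follow the standard SMPL model (24 joints x 3
# axis-angle rotations = 72 pose parameters, 10 shape coefficients, 6890 vertices).
@dataclass
class HuMManFrame:
    color_image: np.ndarray    # (H, W, 3) uint8 RGB image
    point_cloud: np.ndarray    # (N, 3) float32 depth-derived points
    keypoints_3d: np.ndarray   # (K, 3) float32 3D joint locations
    smpl_pose: np.ndarray      # (72,) axis-angle body pose parameters
    smpl_shape: np.ndarray     # (10,) SMPL shape (beta) coefficients
    mesh_vertices: np.ndarray  # (6890, 3) SMPL-topology mesh vertices

# Example: a dummy frame with zeroed data at the assumed shapes
# (K = 24 joints is itself an assumption here).
frame = HuMManFrame(
    color_image=np.zeros((1080, 1920, 3), dtype=np.uint8),
    point_cloud=np.zeros((4096, 3), dtype=np.float32),
    keypoints_3d=np.zeros((24, 3), dtype=np.float32),
    smpl_pose=np.zeros(72, dtype=np.float32),
    smpl_shape=np.zeros(10, dtype=np.float32),
    mesh_vertices=np.zeros((6890, 3), dtype=np.float32),
)
```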
