InterMimic: Towards Universal Whole-Body Control for Physics-Based Human-Object Interactions

27 February 2025
Sirui Xu
Hung Yu Ling
Yu-Xiong Wang
Liang-Yan Gui
Abstract

Achieving realistic simulations of humans interacting with a wide range of objects has long been a fundamental goal. Extending physics-based motion imitation to complex human-object interactions (HOIs) is challenging due to intricate human-object coupling, variability in object geometries, and artifacts in motion capture data, such as inaccurate contacts and limited hand detail. We introduce InterMimic, a framework that enables a single policy to robustly learn from hours of imperfect MoCap data covering diverse full-body interactions with dynamic and varied objects. Our key insight is to employ a curriculum strategy -- perfect first, then scale up. We first train subject-specific teacher policies to mimic, retarget, and refine motion capture data. Next, we distill these teachers into a student policy, with the teachers acting as online experts providing direct supervision, as well as high-quality references. Notably, we incorporate RL fine-tuning on the student policy to surpass mere demonstration replication and achieve higher-quality solutions. Our experiments demonstrate that InterMimic produces realistic and diverse interactions across multiple HOI datasets. The learned policy generalizes in a zero-shot manner and seamlessly integrates with kinematic generators, elevating the framework from mere imitation to generative modeling of complex human-object interactions.
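The abstract outlines a two-stage pipeline: per-subject teacher policies are first trained on MoCap clips, then distilled into a single student policy with the teachers providing online supervision, and finally the student is fine-tuned with RL. The following is a minimal sketch of that general recipe, assuming a DAgger-style distillation loss and plain REINFORCE for fine-tuning; all names (Policy, ToyHOIEnv, distill, rl_finetune), the observation/action sizes, rewards, and the toy environment are illustrative assumptions and are not taken from the InterMimic code release.

import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 64, 24  # hypothetical humanoid+object observation/action sizes


class Policy(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 256), nn.ReLU(),
            nn.Linear(256, ACT_DIM),
        )
        self.log_std = nn.Parameter(torch.zeros(ACT_DIM))

    def forward(self, obs):
        return self.net(obs)

    def sample(self, obs):
        # Gaussian policy: sample an action and return its log-probability.
        dist = torch.distributions.Normal(self.net(obs), self.log_std.exp())
        act = dist.sample()
        return act, dist.log_prob(act).sum(-1)


class ToyHOIEnv:
    """Stand-in for a physics simulator exposing human-object state."""
    def reset(self):
        return torch.randn(OBS_DIM)

    def step(self, action):
        obs = torch.randn(OBS_DIM)
        reward = -action.pow(2).mean()      # placeholder imitation/contact reward
        done = bool(torch.rand(()) < 0.05)
        return obs, reward, done


def distill(student, teachers, env, steps=1000):
    """DAgger-style distillation: the student acts, the matching teacher labels."""
    opt = torch.optim.Adam(student.parameters(), lr=3e-4)
    obs = env.reset()
    for t in range(steps):
        teacher = teachers[t % len(teachers)]   # expert for this subject/clip
        with torch.no_grad():
            target = teacher(obs)               # online expert supervision
        loss = (student(obs) - target).pow(2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
        nxt, _, done = env.step(student(obs).detach())
        obs = env.reset() if done else nxt


def rl_finetune(student, env, episodes=200):
    """REINFORCE fine-tuning so the student can exceed pure demonstration replay."""
    opt = torch.optim.Adam(student.parameters(), lr=1e-4)
    for _ in range(episodes):
        obs, log_probs, rewards, done = env.reset(), [], [], False
        while not done:
            act, logp = student.sample(obs)
            obs, r, done = env.step(act)
            log_probs.append(logp); rewards.append(r)
        ret = torch.stack(rewards).sum()        # undiscounted episode return
        loss = -(torch.stack(log_probs).sum() * ret.detach())
        opt.zero_grad(); loss.backward(); opt.step()


if __name__ == "__main__":
    env = ToyHOIEnv()
    teachers = [Policy() for _ in range(3)]     # assume these were trained per subject
    student = Policy()
    distill(student, teachers, env)
    rl_finetune(student, env)

In the paper's setting the teachers additionally retarget and refine the imperfect MoCap references before supervising the student; the sketch above only captures the control flow of distillation followed by RL fine-tuning.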

@article{xu2025_2502.20390,
  title={InterMimic: Towards Universal Whole-Body Control for Physics-Based Human-Object Interactions},
  author={Sirui Xu and Hung Yu Ling and Yu-Xiong Wang and Liang-Yan Gui},
  journal={arXiv preprint arXiv:2502.20390},
  year={2025}
}