Visual Imitation Enables Contextual Humanoid Control

6 May 2025
Arthur Allshire
Hongsuk Choi
Junyi Zhang
David McAllister
Anthony Zhang
Chung Min Kim
Trevor Darrell
Pieter Abbeel
Jitendra Malik
Angjoo Kanazawa
Abstract

How can we teach humanoids to climb staircases and sit on chairs using the surrounding environment context? Arguably, the simplest way is to just show them: casually capture a human motion video and feed it to humanoids. We introduce VideoMimic, a real-to-sim-to-real pipeline that mines everyday videos, jointly reconstructs the humans and the environment, and produces whole-body control policies for humanoid robots that perform the corresponding skills. We demonstrate the results of our pipeline on real humanoid robots, showing robust, repeatable contextual control such as staircase ascents and descents, sitting and standing from chairs and benches, as well as other dynamic whole-body skills, all from a single policy conditioned on the environment and global root commands. VideoMimic offers a scalable path towards teaching humanoids to operate in diverse real-world environments.
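To make the abstract's real-to-sim-to-real pipeline concrete, below is a minimal Python sketch of its three stages (video reconstruction, policy training in simulation, deployment). All class and function names here are illustrative assumptions for exposition, not the authors' actual code or API.

# Hypothetical sketch of a real-to-sim-to-real pipeline of the kind the
# abstract describes; every name below is an assumption, not VideoMimic's API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Reconstruction:
    """Joint human-and-scene reconstruction recovered from a monocular video."""
    human_poses: List[list] = field(default_factory=list)  # per-frame body poses
    scene_mesh: str = "scene.obj"                           # environment geometry

def reconstruct(video_path: str) -> Reconstruction:
    """Stage 1 (real-to-sim): mine an everyday video and jointly reconstruct
    the human motion and the surrounding environment."""
    # Placeholder: a real system would run pose estimation plus scene reconstruction.
    return Reconstruction(human_poses=[[0.0, 0.0, 0.0]], scene_mesh=f"{video_path}.obj")

def train_policy(clips: List[Reconstruction]) -> str:
    """Stage 2 (sim): retarget reconstructed motions to the humanoid and train a
    single whole-body policy conditioned on the environment and root commands."""
    # Placeholder for imitation/RL training in simulation.
    return f"policy trained on {len(clips)} reconstructed clips"

if __name__ == "__main__":
    # Stage 3 (sim-to-real) would deploy the trained policy on the physical robot.
    clips = [reconstruct(v) for v in ["stairs.mp4", "chair.mp4"]]
    print(train_policy(clips))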

@article{allshire2025_2505.03729,
  title={Visual Imitation Enables Contextual Humanoid Control},
  author={Arthur Allshire and Hongsuk Choi and Junyi Zhang and David McAllister and Anthony Zhang and Chung Min Kim and Trevor Darrell and Pieter Abbeel and Jitendra Malik and Angjoo Kanazawa},
  journal={arXiv preprint arXiv:2505.03729},
  year={2025}
}