One Policy but Many Worlds: A Scalable Unified Policy for Versatile Humanoid Locomotion

24 May 2025
Yahao Fan
Tianxiang Gui
Kaiyang Ji
Shutong Ding
Chixuan Zhang
Jiayuan Gu
Jingyi Yu
Jingya Wang
Ye Shi
    VGen
arXiv (abs) · PDF · HTML
Main: 9 pages · Bibliography: 6 pages · Appendix: 11 pages · 16 figures · 14 tables
Abstract

Humanoid locomotion faces a critical scalability challenge: traditional reinforcement learning (RL) methods require task-specific rewards and struggle to leverage growing datasets, even as more training terrains are introduced. We propose DreamPolicy, a unified framework that enables a single policy to master diverse terrains and generalize zero-shot to unseen scenarios by systematically integrating offline data and diffusion-driven motion synthesis. At its core, DreamPolicy introduces Humanoid Motion Imagery (HMI): future state predictions synthesized by an autoregressive, terrain-aware diffusion planner trained on data curated by aggregating rollouts from specialized policies across distinct terrains. Unlike human motion datasets that require laborious retargeting, our data directly captures humanoid kinematics, enabling the diffusion planner to synthesize "dreamed" trajectories that encode terrain-specific physical constraints. These trajectories act as dynamic objectives for our HMI-conditioned policy, bypassing manual reward engineering and enabling cross-terrain generalization. DreamPolicy addresses the scalability limitations of prior methods: while traditional RL fails to exploit growing datasets, our framework scales seamlessly with more offline data. As the dataset expands, the diffusion prior learns richer locomotion skills, which the policy leverages to master new terrains without retraining. Experiments demonstrate that DreamPolicy achieves an average 90% success rate in training environments and, on average, 20% higher success on unseen terrains than prevailing methods. It also generalizes to perturbed and composite scenarios where prior approaches collapse. By unifying offline data, diffusion-based trajectory synthesis, and policy optimization, DreamPolicy overcomes the "one task, one policy" bottleneck, establishing a paradigm for scalable, data-driven humanoid control.
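The abstract describes a two-stage scheme: an autoregressive, terrain-aware diffusion planner imagines short horizons of future humanoid states (the HMI), and a single policy is conditioned on those imagined states instead of a hand-crafted, task-specific reward. The sketch below illustrates only that control-loop structure; every module name, dimension, and network shape here is an assumption made for illustration and is not taken from the paper.

```python
# Conceptual sketch of the DreamPolicy control loop described in the abstract.
# All names, shapes, and interfaces are assumptions; the paper's actual
# architecture may differ substantially.
import torch
import torch.nn as nn

STATE_DIM = 64      # assumed humanoid proprioceptive state size
TERRAIN_DIM = 32    # assumed terrain-feature embedding size
ACTION_DIM = 19     # assumed joint-action dimension
HORIZON = 8         # assumed number of "dreamed" future states per plan


class TerrainAwareDiffusionPlanner(nn.Module):
    """Stand-in for the autoregressive diffusion planner that synthesizes
    Humanoid Motion Imagery (HMI): a short horizon of future states
    conditioned on the current state and terrain features."""

    def __init__(self):
        super().__init__()
        self.denoiser = nn.Sequential(
            nn.Linear(STATE_DIM + TERRAIN_DIM + HORIZON * STATE_DIM, 256),
            nn.ReLU(),
            nn.Linear(256, HORIZON * STATE_DIM),
        )

    @torch.no_grad()
    def plan(self, state, terrain, num_denoise_steps=10):
        # Start from noise and iteratively refine the imagined trajectory.
        traj = torch.randn(state.shape[0], HORIZON * STATE_DIM)
        for _ in range(num_denoise_steps):
            traj = self.denoiser(torch.cat([state, terrain, traj], dim=-1))
        return traj.view(-1, HORIZON, STATE_DIM)


class HMIConditionedPolicy(nn.Module):
    """Single policy conditioned on the imagined trajectory rather than on a
    task-specific reward signal."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + HORIZON * STATE_DIM, 256),
            nn.ReLU(),
            nn.Linear(256, ACTION_DIM),
        )

    def forward(self, state, hmi):
        return self.net(torch.cat([state, hmi.flatten(1)], dim=-1))


# Control loop: the planner "dreams" future states, the policy tracks them.
planner, policy = TerrainAwareDiffusionPlanner(), HMIConditionedPolicy()
state = torch.zeros(1, STATE_DIM)
terrain = torch.zeros(1, TERRAIN_DIM)
hmi = planner.plan(state, terrain)   # imagined future states (HMI)
action = policy(state, hmi)          # action toward the dreamed trajectory
```

During training, the dreamed states would serve as dynamic tracking objectives for the policy, which is how the abstract says manual reward engineering is bypassed; the loop above only shows how the planner's output conditions the action.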

View on arXiv
@article{fan2025_2505.18780,
  title={One Policy but Many Worlds: A Scalable Unified Policy for Versatile Humanoid Locomotion},
  author={Yahao Fan and Tianxiang Gui and Kaiyang Ji and Shutong Ding and Chixuan Zhang and Jiayuan Gu and Jingyi Yu and Jingya Wang and Ye Shi},
  journal={arXiv preprint arXiv:2505.18780},
  year={2025}
}