ResearchTrend.AI

Preference Aligned Diffusion Planner for Quadrupedal Locomotion Control

17 October 2024
Xinyi Yuan
Zhiwei Shang
Zifan Wang
Chenkai Wang
Zhao Shan
Zhenchao Qi
Meixin Zhu
Chenjia Bai
Weiwei Wan
Kensuke Harada
Abstract

Diffusion models demonstrate superior performance in capturing complex distributions from large-scale datasets, making them a promising solution for quadrupedal locomotion control. However, the robustness of a diffusion planner is inherently tied to the diversity of the pre-collected dataset. To mitigate this issue, we propose a two-stage learning framework that enhances the capability of the diffusion planner under a limited, reward-agnostic dataset. In the offline stage, the diffusion planner learns the joint distribution of state-action sequences from expert datasets without using reward labels. In the subsequent online stage, the trained planner interacts with the simulation environment, which significantly diversifies the original behaviors and thus improves robustness. Specifically, we propose a novel weak preference labeling method that requires neither ground-truth rewards nor human preferences. The proposed method exhibits superior stability and velocity-tracking accuracy in pacing, trotting, and bounding gaits at different speeds, and performs zero-shot transfer to real Unitree Go1 robots. The project website for this paper is at this https URL.
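The two-stage framework described above can be sketched at a high level as follows. This is a minimal toy illustration, not the authors' implementation: the "planner" is a trivial stand-in for a trained diffusion model, and the velocity-tracking proxy used for weak preference labeling is an assumption for demonstration purposes.

```python
import random

def offline_stage(expert_trajectories):
    """Stage 1 (sketch): fit a planner on reward-free expert data.
    Here the planner simply replays a perturbed expert state sequence,
    standing in for sampling from a learned diffusion model."""
    def planner(rng):
        base = rng.choice(expert_trajectories)
        return [s + rng.gauss(0.0, 0.1) for s in base]
    return planner

def weak_preference_label(traj_a, traj_b, target_velocity=1.0):
    """Weakly label a rollout pair without ground-truth reward or human
    input: prefer the rollout whose mean value tracks the target velocity
    better (an illustrative proxy criterion)."""
    err = lambda t: abs(sum(t) / len(t) - target_velocity)
    return 0 if err(traj_a) <= err(traj_b) else 1

def online_stage(planner, num_pairs=50, seed=0):
    """Stage 2 (sketch): roll out the offline planner in simulation and
    collect weakly labeled preference pairs for further fine-tuning."""
    rng = random.Random(seed)
    dataset = []
    for _ in range(num_pairs):
        a, b = planner(rng), planner(rng)
        dataset.append((a, b, weak_preference_label(a, b)))
    return dataset

expert = [[0.9, 1.0, 1.1], [1.2, 1.0, 0.8]]  # toy expert state sequences
planner = offline_stage(expert)
prefs = online_stage(planner)
```

In the paper's setting, the preference dataset collected in stage 2 would then be used to align the diffusion planner toward the preferred behaviors.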

View on arXiv
@article{yuan2025_2410.13586,
  title={Preference Aligned Diffusion Planner for Quadrupedal Locomotion Control},
  author={Xinyi Yuan and Zhiwei Shang and Zifan Wang and Chenkai Wang and Zhao Shan and Meixin Zhu and Chenjia Bai and Xuelong Li and Weiwei Wan and Kensuke Harada},
  journal={arXiv preprint arXiv:2410.13586},
  year={2025}
}