Walking with Terrain Reconstruction: Learning to Traverse Risky Sparse Footholds

24 September 2024
Ruiqi Yu, Qianshi Wang, Yizhen Wang, Zhicheng Wang, Jun Wu, Qiuguo Zhu
Abstract

Traversing risky terrains with sparse footholds presents significant challenges for legged robots, requiring precise foot placement in safe areas. To acquire comprehensive exteroceptive information, prior studies have employed motion capture systems or mapping techniques to generate heightmaps for the locomotion policy. However, these approaches require specialized pipelines and often introduce additional noise. While depth images from egocentric vision systems are cost-effective, their limited field of view and sparse information hinder the integration of terrain structure details into implicit features, which are essential for generating precise actions. In this paper, we demonstrate that end-to-end reinforcement learning relying solely on proprioception and depth images is capable of traversing risky terrains with high sparsity and randomness. Our method introduces local terrain reconstruction, leveraging the heightmap's clear features and sufficient information as an intermediary for visual feature extraction and motion generation. This allows the policy to effectively represent and memorize critical terrain information. We deploy the proposed framework on a low-cost quadrupedal robot, achieving agile and adaptive locomotion across various challenging terrains and showcasing outstanding performance in real-world scenarios. Video at: this http URL.
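The abstract describes a pipeline in which an egocentric depth image is encoded, a local heightmap is reconstructed from that encoding, and the reconstructed terrain plus proprioception drives the locomotion policy. The snippet below is a minimal PyTorch sketch of that idea only; the module names, layer sizes, 21x11 heightmap grid, 48-D proprioception, and 12-D action space are illustrative assumptions and do not reflect the authors' actual architecture or training setup.

import torch
import torch.nn as nn

class DepthEncoder(nn.Module):
    """Encodes an egocentric depth image into a compact latent vector."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ELU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ELU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ELU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, latent_dim)

    def forward(self, depth):              # depth: (B, 1, H, W)
        x = self.conv(depth).flatten(1)    # (B, 64)
        return self.fc(x)                  # (B, latent_dim)

class HeightmapDecoder(nn.Module):
    """Reconstructs a local heightmap grid from the visual latent.
    In simulation this could be supervised against a privileged heightmap."""
    def __init__(self, latent_dim=128, grid_cells=21 * 11):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ELU(),
            nn.Linear(256, grid_cells),
        )

    def forward(self, latent):
        return self.net(latent)            # (B, grid_cells) cell heights

class LocomotionPolicy(nn.Module):
    """Policy head acting on proprioception plus the reconstructed heightmap."""
    def __init__(self, proprio_dim=48, grid_cells=21 * 11, action_dim=12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(proprio_dim + grid_cells, 256), nn.ELU(),
            nn.Linear(256, 128), nn.ELU(),
            nn.Linear(128, action_dim),
        )

    def forward(self, proprio, heightmap):
        return self.net(torch.cat([proprio, heightmap], dim=-1))

# One forward pass with dummy data (batch of 4).
encoder, decoder, policy = DepthEncoder(), HeightmapDecoder(), LocomotionPolicy()
depth = torch.randn(4, 1, 64, 64)          # egocentric depth images
proprio = torch.randn(4, 48)               # joint states, base velocities, etc.
heightmap = decoder(encoder(depth))        # intermediate terrain reconstruction
actions = policy(proprio, heightmap)       # (4, 12) joint targets
# Placeholder reconstruction loss; a real setup would use the ground-truth
# heightmap from the simulator instead of zeros.
recon_loss = nn.functional.mse_loss(heightmap, torch.zeros_like(heightmap))

The key design point mirrored here is that the heightmap acts as an explicit intermediate representation between perception and control, rather than letting the policy consume an opaque visual latent directly.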

@article{yu2025_2409.15692,
  title={Walking with Terrain Reconstruction: Learning to Traverse Risky Sparse Footholds},
  author={Ruiqi Yu and Qianshi Wang and Yizhen Wang and Zhicheng Wang and Jun Wu and Qiuguo Zhu},
  journal={arXiv preprint arXiv:2409.15692},
  year={2025}
}