ResearchTrend.AI

Embodied Escaping: End-to-End Reinforcement Learning for Robot Navigation in Narrow Environment

5 March 2025
Han Zheng
Jiale Zhang
Mingyang Jiang
Peiyuan Liu
Danni Liu
Tong Qin
Ming Yang
Abstract

Autonomous navigation is a fundamental task for robot vacuum cleaners in indoor environments. Since their core function is to clean entire areas, robots inevitably encounter dead zones in cluttered and narrow scenarios. Existing planning methods often fail to escape due to complex environmental constraints, high-dimensional search spaces, and the difficulty of the required maneuvers. To address these challenges, this paper proposes an embodied escaping model that leverages a reinforcement learning-based policy with an efficient action mask for dead-zone escaping. To alleviate the issue of sparse rewards in training, we introduce a hybrid training policy that improves learning efficiency. To handle redundant and ineffective action options, we design a novel action representation that reshapes the discrete action space with a uniform turning radius. Furthermore, we develop an action mask strategy to select valid actions quickly, balancing precision and efficiency. In real-world experiments, our robot is equipped with a Lidar, an IMU, and two wheel encoders. Extensive quantitative and qualitative experiments across varying difficulty levels demonstrate that our robot can consistently escape from challenging dead zones. Moreover, our approach significantly outperforms competing path planning and reinforcement learning methods in terms of success rate and collision avoidance.
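The abstract mentions masking invalid actions in a discrete action space so the policy only samples maneuvers that are actually feasible. The paper's own masking strategy is not detailed here; the following is a minimal generic sketch of the common technique (set the logits of masked-out actions to negative infinity before the softmax), with the action count and collision mask purely illustrative.

```python
import numpy as np

def masked_action_probs(logits: np.ndarray, valid_mask: np.ndarray) -> np.ndarray:
    """Suppress invalid actions by sending their logits to -inf,
    then take a softmax over the remaining (valid) actions."""
    masked = np.where(valid_mask, logits, -np.inf)
    # Subtract the max for numerical stability; exp(-inf) -> 0.
    exp = np.exp(masked - masked.max())
    return exp / exp.sum()

# Hypothetical example: 5 discrete arc maneuvers, where a collision
# checker has flagged actions 1 and 3 as invalid.
logits = np.array([0.2, 1.5, -0.3, 0.8, 0.1])
mask = np.array([True, False, True, False, True])
probs = masked_action_probs(logits, mask)
```

Invalid actions end up with exactly zero probability, so sampling from `probs` can never pick a maneuver the mask ruled out, while the relative preferences among valid actions are preserved.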

@article{zheng2025_2503.03208,
  title={Embodied Escaping: End-to-End Reinforcement Learning for Robot Navigation in Narrow Environment},
  author={Han Zheng and Jiale Zhang and Mingyang Jiang and Peiyuan Liu and Danni Liu and Tong Qin and Ming Yang},
  journal={arXiv preprint arXiv:2503.03208},
  year={2025}
}