Experience-based Optimal Motion Planning Algorithm for Solving Difficult Planning Problems Using a Limited Dataset

19 March 2025
Ryota Takamido
Jun Ota
Abstract

This study addresses the key challenge of obtaining a high-quality solution path within a short calculation time by generalizing a limited dataset. To this end, it proposes the informed experience-driven random trees connect star (IERTC*) algorithm, which flexibly explores the search trees by morphing micro paths generated from a single experience while reducing the path cost through a rewiring process and an informed sampling process. The core idea of the algorithm is to apply different strategies depending on the complexity of the local environment: for example, it adopts a more complex curved trajectory if obstacles are densely arranged near the search tree, and a simpler straight line if the local environment is sparse. Experiments on a general motion planning benchmark revealed that IERTC* significantly improved the planning success rate on difficult problems in cluttered environments (an average improvement of 49.3% over a state-of-the-art algorithm) while also significantly reducing the solution cost (a reduction of 56.3%) when using one hundred experiences. Furthermore, IERTC* demonstrated outstanding planning performance even when only one experience was available (a 43.8% improvement in success rate and a 57.8% reduction in solution cost).
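The abstract does not include code; the following is a minimal, hypothetical Python sketch of the strategy-switching idea it describes: extend the tree with a straight segment when the local environment is sparse, and with a morphed experience-based micro path when obstacles are dense near the search tree. All names (`local_obstacle_density`, `morph_micro_path`, the density threshold, the toy 2D data) are illustrative assumptions, not the authors' implementation, and collision checking, rewiring, and informed sampling are omitted.

```python
import numpy as np

# Hypothetical sketch of the local strategy switching described in the abstract
# (not the authors' IERTC* code): straight extension in sparse regions,
# experience-based curved micro path in cluttered ones.

def local_obstacle_density(point, obstacles, radius=1.0):
    """Count obstacle centers within `radius` of `point` (a crude density proxy)."""
    dists = np.linalg.norm(obstacles - point, axis=1)
    return int(np.sum(dists < radius))

def straight_extension(q_near, q_rand, step=0.5):
    """Standard RRT-style straight-line step from q_near toward q_rand."""
    direction = q_rand - q_near
    norm = np.linalg.norm(direction)
    if norm < 1e-9:
        return q_near
    return q_near + step * direction / norm

def morph_micro_path(micro_path, q_near, q_rand):
    """Map a stored 2D micro path (rotation + uniform scaling + translation)
    so its endpoints align with q_near and q_rand; a crude stand-in for the
    morphing described in the abstract."""
    start, end = micro_path[0], micro_path[-1]
    src, dst = end - start, q_rand - q_near
    scale = np.linalg.norm(dst) / max(np.linalg.norm(src), 1e-9)
    angle = np.arctan2(dst[1], dst[0]) - np.arctan2(src[1], src[0])
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s], [s, c]])
    return q_near + scale * (micro_path - start) @ rot.T

def extend(q_near, q_rand, obstacles, micro_path, density_threshold=3):
    """Choose the extension strategy from the local environment complexity."""
    if local_obstacle_density(q_near, obstacles) >= density_threshold:
        return morph_micro_path(micro_path, q_near, q_rand)  # cluttered: curved
    return straight_extension(q_near, q_rand)                 # sparse: straight

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    obstacles = rng.uniform(0.0, 10.0, size=(20, 2))           # toy 2D obstacle centers
    micro_path = np.array([[0.0, 0.0], [0.3, 0.25],            # toy curved experience
                           [0.6, 0.4], [0.85, 0.3], [1.0, 0.0]])
    q_near, q_rand = np.array([2.0, 2.0]), np.array([5.0, 6.0])
    print(extend(q_near, q_rand, obstacles, micro_path))
```

The density threshold simply stands in for whatever measure of local complexity the full algorithm uses; the point of the sketch is only that the extension primitive changes with the local environment.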

@article{takamido2025_2503.15715,
  title={Experience-based Optimal Motion Planning Algorithm for Solving Difficult Planning Problems Using a Limited Dataset},
  author={Ryota Takamido and Jun Ota},
  journal={arXiv preprint arXiv:2503.15715},
  year={2025}
}