E2Map: Experience-and-Emotion Map for Self-Reflective Robot Navigation with Language Models

16 September 2024
Chan Kim
Keonwoo Kim
Mintaek Oh
Hanbi Baek
Jiyang Lee
Donghwi Jung
Soojin Woo
Younkyung Woo
John Tucker
Roya Firoozi
Seung-Woo Seo
Mac Schwager
Seong-Woo Kim
Abstract

Large language models (LLMs) have shown significant potential in guiding embodied agents to execute language instructions across a range of tasks, including robotic manipulation and navigation. However, existing methods are primarily designed for static environments and do not leverage the agent's own experiences to refine its initial plans. Because real-world environments are inherently stochastic, initial plans based solely on LLMs' general knowledge may fail to achieve their objectives, unlike in static scenarios. To address this limitation, this study introduces the Experience-and-Emotion Map (E2Map), which integrates not only LLM knowledge but also the agent's real-world experiences, drawing inspiration from human emotional responses. The proposed methodology enables one-shot behavior adjustments by updating the E2Map based on the agent's experiences. Our evaluation in stochastic navigation environments, covering both simulation and real-world scenarios, demonstrates that the proposed method significantly outperforms existing LLM-based approaches. Code and supplementary materials are available at this https URL.
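
To make the map-update idea concrete, here is a minimal sketch in Python of how an experience-driven cost layer over a 2D grid might work: after a failure event, the agent raises the cost around the event location in one shot, so subsequent planning routes around it. All names (E2MapSketch, add_experience, cost) and the Gaussian-bump update are illustrative assumptions, not the paper's actual API or update rule.

import numpy as np

class E2MapSketch:
    """Hypothetical experience layer over a 2D occupancy/cost grid."""

    def __init__(self, height, width, sigma=2.0):
        self.emotion = np.zeros((height, width))  # accumulated experience costs
        self.sigma = sigma                        # spatial spread of each update

    def add_experience(self, row, col, intensity=1.0):
        """One-shot adjustment: raise cost around a failure at (row, col)."""
        rows, cols = np.indices(self.emotion.shape)
        dist2 = (rows - row) ** 2 + (cols - col) ** 2
        # Gaussian bump centered on the failure site (assumed update shape).
        self.emotion += intensity * np.exp(-dist2 / (2 * self.sigma ** 2))

    def cost(self, base_cost):
        """Combine a static cost map with the learned experience layer."""
        return base_cost + self.emotion

# Usage: after a failed traversal near cell (10, 12), the planner sees
# elevated cost there and can avoid that region on the next attempt.
e2map = E2MapSketch(32, 32)
e2map.add_experience(10, 12, intensity=5.0)
static_map = np.zeros((32, 32))
print(e2map.cost(static_map)[10, 12])  # higher cost at the failure site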

View on arXiv
@article{kim2025_2409.10027,
  title={E2Map: Experience-and-Emotion Map for Self-Reflective Robot Navigation with Language Models},
  author={Chan Kim and Keonwoo Kim and Mintaek Oh and Hanbi Baek and Jiyang Lee and Donghwi Jung and Soojin Woo and Younkyung Woo and John Tucker and Roya Firoozi and Seung-Woo Seo and Mac Schwager and Seong-Woo Kim},
  journal={arXiv preprint arXiv:2409.10027},
  year={2025}
}