Ground-level Viewpoint Vision-and-Language Navigation in Continuous Environments

26 February 2025
Zerui Li
Gengze Zhou
Haodong Hong
Yanyan Shao
Wenqi Lyu
Yanyuan Qiao
Qi Wu
Abstract

Vision-and-Language Navigation (VLN) empowers agents to associate time-sequenced visual observations with corresponding instructions to make sequential decisions. However, generalization remains a persistent challenge, particularly when dealing with visually diverse scenes or transitioning from simulated environments to real-world deployment. In this paper, we address the mismatch between human-centric instructions and quadruped robots with a low-height field of view, proposing a Ground-level Viewpoint Navigation (GVNav) approach to mitigate this issue. This work represents the first attempt to highlight the generalization gap in VLN across varying heights of visual observation in realistic robot deployments. Our approach leverages weighted historical observations as enriched spatiotemporal contexts for instruction following, effectively managing feature collisions within cells by assigning appropriate weights to identical features across different viewpoints. This enables low-height robots to overcome challenges such as visual obstructions and perceptual mismatches. Additionally, we transfer the connectivity graph from the HM3D and Gibson datasets as an extra resource to enhance spatial priors and provide a more comprehensive representation of real-world scenarios, leading to improved performance and generalizability of the waypoint predictor in real-world environments. Extensive experiments demonstrate that our GVNav approach significantly improves performance in both simulated environments and real-world deployments with quadruped robots.
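The weighted-history idea described above can be illustrated with a minimal sketch: features observed from several viewpoints are projected into the same spatial grid cell, and collisions are resolved by a weighted average rather than overwriting. This is a hypothetical illustration, not the paper's implementation; the function name, the `(cell, feature, weight)` observation format, and the example weights are all assumptions made for clarity.

```python
import numpy as np

def accumulate_weighted_features(grid_shape, feat_dim, observations):
    """Accumulate per-viewpoint features into a spatial grid.

    observations: list of ((row, col), feature_vector, weight) tuples.
    When several viewpoints project a feature into the same cell
    (a "feature collision"), the cell keeps the weighted average
    instead of letting the latest observation overwrite earlier ones.
    """
    feat_sum = np.zeros((*grid_shape, feat_dim))
    weight_sum = np.zeros(grid_shape)
    for (r, c), feat, w in observations:
        feat_sum[r, c] += w * np.asarray(feat, dtype=float)
        weight_sum[r, c] += w
    # Normalize only cells that received at least one observation.
    nonzero = weight_sum > 0
    feat_sum[nonzero] /= weight_sum[nonzero][:, None]
    return feat_sum

# Two viewpoints observe the same cell; the more informative
# (e.g. more recent, less occluded) view gets the larger weight.
obs = [
    ((0, 0), [1.0, 0.0], 0.8),
    ((0, 0), [0.0, 1.0], 0.2),
]
grid = accumulate_weighted_features((2, 2), 2, obs)
# grid[0, 0] → [0.8, 0.2]
```

In the paper's setting, the weights would encode how trustworthy each (low-height) viewpoint's feature is for a given cell; the sketch only shows the aggregation mechanics.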

@article{li2025_2502.19024,
  title={Ground-level Viewpoint Vision-and-Language Navigation in Continuous Environments},
  author={Zerui Li and Gengze Zhou and Haodong Hong and Yanyan Shao and Wenqi Lyu and Yanyuan Qiao and Qi Wu},
  journal={arXiv preprint arXiv:2502.19024},
  year={2025}
}