ResearchTrend.AI
IVLMap: Instance-Aware Visual Language Grounding for Consumer Robot Navigation

28 March 2024
Jiacui Huang, Hongtao Zhang, Mingbo Zhao, Zhou Wu
LM&Ro
arXiv:2403.19336

Papers citing "IVLMap: Instance-Aware Visual Language Grounding for Consumer Robot Navigation"

5 papers shown
Mobile ALOHA: Learning Bimanual Mobile Manipulation with Low-Cost Whole-Body Teleoperation
Zipeng Fu, Tony Zhao, Chelsea Finn
04 Jan 2024
Visual Language Maps for Robot Navigation
Chen Huang, Oier Mees, Andy Zeng, Wolfram Burgard
LM&Ro
11 Oct 2022
Iterative Vision-and-Language Navigation
Jacob Krantz, Shurjo Banerjee, Wang Zhu, Jason J. Corso, Peter Anderson, Stefan Lee, Jesse Thomason
LM&Ro
06 Oct 2022
Open-vocabulary Queryable Scene Representations for Real World Planning
Boyuan Chen, F. Xia, Brian Ichter, Kanishka Rao, K. Gopalakrishnan, Michael S. Ryoo, Austin Stone, Daniel Kappler
LM&Ro
20 Sep 2022
LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action
Dhruv Shah, B. Osinski, Brian Ichter, Sergey Levine
LM&Ro
10 Jul 2022