ResearchTrend.AI
CLIP-Nav: Using CLIP for Zero-Shot Vision-and-Language Navigation
30 November 2022
Vishnu Sashank Dorbala, Gunnar A. Sigurdsson, Robinson Piramuthu, Jesse Thomason, Gaurav Sukhatme
LM&Ro

Papers citing "CLIP-Nav: Using CLIP for Zero-Shot Vision-and-Language Navigation"

14 / 14 papers shown
ELA-ZSON: Efficient Layout-Aware Zero-Shot Object Navigation Agent with Hierarchical Planning
Jiawei Hou, Yuting Xiao, Xiangyang Xue, Taiping Zeng
39 · 0 · 0
09 May 2025
SCAM: A Real-World Typographic Robustness Evaluation for Multimodal Foundation Models
Justus Westerhoff, Erblina Purellku, Jakob Hackstein, Jonas Loos, Leo Pinetzki, Lorenz Hufe
AAML
28 · 0 · 0
07 Apr 2025
TRAVEL: Training-Free Retrieval and Alignment for Vision-and-Language Navigation
Navid Rajabi, Jana Kosecka
LM&Ro, 3DV
53 · 0 · 0
11 Feb 2025
Demonstrating CavePI: Autonomous Exploration of Underwater Caves by Semantic Guidance
Alankrit Gupta, Adnan Abdullah, Xianyao Li, Vaishnav Ramesh, Ioannis M. Rekleitis, Md Jahidul Islam
52 · 0 · 0
07 Feb 2025
Open-Nav: Exploring Zero-Shot Vision-and-Language Navigation in Continuous Environment with Open-Source LLMs
Yanyuan Qiao, Wenqi Lyu, Hui Wang, Zixu Wang, Zerui Li, Yuan Zhang, Mingkui Tan, Qi Wu
LRM
36 · 2 · 0
27 Sep 2024
Affordance-Guided Reinforcement Learning via Visual Prompting
Olivia Y. Lee, Annie Xie, Kuan Fang, Karl Pertsch, Chelsea Finn
OffRL, LM&Ro
67 · 7 · 0
14 Jul 2024
Can LLMs Generate Human-Like Wayfinding Instructions? Towards Platform-Agnostic Embodied Instruction Synthesis
Vishnu Sashank Dorbala, Sanjoy Chowdhury, Dinesh Manocha
LM&Ro
25 · 0 · 0
18 Mar 2024
Advances in Embodied Navigation Using Large Language Models: A Survey
Jinzhou Lin, Han Gao, Xuxiang Feng, Rongtao Xu, Changwei Wang, Man Zhang, Li Guo, Shibiao Xu
LM&Ro, LLMAG
63 · 9 · 0
01 Nov 2023
NavGPT: Explicit Reasoning in Vision-and-Language Navigation with Large Language Models
Gengze Zhou, Yicong Hong, Qi Wu
ELM, LM&Ro, LLMAG, LRM
23 · 139 · 0
26 May 2023
Multimodal Grounding for Embodied AI via Augmented Reality Headsets for Natural Language Driven Task Planning
Selma Wanna, Fabian Parra, R. Valner, Karl Kruusamäe, Mitch Pryor
LM&Ro
22 · 2 · 0
26 Apr 2023
FM-Loc: Using Foundation Models for Improved Vision-based Localization
Reihaneh Mirjalili, Michael Krawez, Wolfram Burgard
VLM
27 · 15 · 0
14 Apr 2023
DialFRED: Dialogue-Enabled Agents for Embodied Instruction Following
Xiaofeng Gao, Qiaozi Gao, Ran Gong, Kaixiang Lin, Govind Thattai, Gaurav Sukhatme
LM&Ro
78 · 70 · 0
27 Feb 2022
TEACh: Task-driven Embodied Agents that Chat
Aishwarya Padmakumar, Jesse Thomason, Ayush Shrivastava, P. Lange, Anjali Narayan-Chen, Spandana Gella, Robinson Piramuthu, Gökhan Tür, Dilek Z. Hakkani-Tür
LM&Ro
155 · 180 · 0
01 Oct 2021
How Much Can CLIP Benefit Vision-and-Language Tasks?
Sheng Shen, Liunian Harold Li, Hao Tan, Mohit Bansal, Anna Rohrbach, Kai-Wei Chang, Z. Yao, Kurt Keutzer
CLIP, VLM, MLLM
188 · 405 · 0
13 Jul 2021