Cited By: arXiv 2311.08244
Language and Sketching: An LLM-driven Interactive Multimodal Multitask Robot Navigation Framework
14 November 2023
Weiqin Zu, Wenbin Song, Ruiqing Chen, Ze Guo, Fanglei Sun, Zheng Tian, Wei Pan, Jun Wang
Papers citing "Language and Sketching: An LLM-driven Interactive Multimodal Multitask Robot Navigation Framework" (8 of 8 papers shown):
Multi-Agent LLM Actor-Critic Framework for Social Robot Navigation
Weizheng Wang, Ike Obi, Byung-Cheol Min
LLMAG · 12 Mar 2025

MARLIN: Multi-Agent Reinforcement Learning Guided by Language-Based Inter-Robot Negotiation
Toby Godfrey, William Hunt, Mohammad D. Soorati
18 Oct 2024

CANVAS: Commonsense-Aware Navigation System for Intuitive Human-Robot Interaction
Suhwan Choi, Yongjun Cho, Minchan Kim, Jaeyoon Jung, Myunchul Joe, ..., Sungwoong Kim, Sungjae Lee, Hwiseong Park, Jiwan Chung, Youngjae Yu
02 Oct 2024

Arena 4.0: A Comprehensive ROS2 Development and Benchmarking Platform for Human-centric Navigation Using Generative-Model-based Environment Generation
Volodymyr Shcherbyna, Linh Kästner, Diego Diaz, Huu Giang Nguyen, Maximilian Ho-Kyoung Schreff, Tim Lenz, Jonas Kreutz, Ahmed Martban, Huajian Zeng, Harold Soh
19 Sep 2024

LMMCoDrive: Cooperative Driving with Large Multimodal Model
Haichao Liu, Ruoyu Yao, Zhenmin Huang, Shaojie Shen, Jun Ma
18 Sep 2024

Perceive, Reflect, and Plan: Designing LLM Agent for Goal-Directed City Navigation without Instructions
Qingbin Zeng, Qinglong Yang, Shunan Dong, Heming Du, Liang Zheng, Fengli Xu, Yong Li
LLMAG, LM&Ro · 08 Aug 2024

ROSGPT_Vision: Commanding Robots Using Only Language Models' Prompts
Bilel Benjdira, Anis Koubaa, Anas M. Ali
LM&Ro · 22 Aug 2023

Chat with the Environment: Interactive Multimodal Perception Using Large Language Models
Xufeng Zhao, Mengdi Li, C. Weber, Muhammad Burhan Hafez, S. Wermter
LLMAG, LM&Ro, LRM · 14 Mar 2023