ResearchTrend.AI

Multimodal Human-Autonomous Agents Interaction Using Pre-Trained Language and Visual Foundation Models

31 December 2024
Linus Nwankwo, Elmar Rueckert

Papers citing "Multimodal Human-Autonomous Agents Interaction Using Pre-Trained Language and Visual Foundation Models"

3 / 3 papers shown
ReLI: A Language-Agnostic Approach to Human-Robot Interaction
  Linus Nwankwo, Bjoern Ellensohn, Ozan Özdenizci, Elmar Rueckert
  LM&Ro · 58 · 0 · 0 · 03 May 2025

ROMR: A ROS-based Open-source Mobile Robot
  Linus Nwankwo, Clemens Fritze, Konrad Bartsch, Elmar Rueckert
  41 · 9 · 0 · 04 Oct 2022

Zero-Shot Text-to-Image Generation
  Aditya A. Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever
  VLM · 255 · 4,781 · 0 · 24 Feb 2021