LIMT: Language-Informed Multi-Task Visual World Models
arXiv:2407.13466, 18 July 2024
Elie Aljalbout, Nikolaos Sotirakis, Patrick van der Smagt, Maximilian Karl, Nutan Chen
Papers citing "LIMT: Language-Informed Multi-Task Visual World Models" (7 of 7 shown)
Sharing Knowledge in Multi-Task Deep Reinforcement Learning
Carlo D'Eramo, Davide Tateo, Andrea Bonarini, Marcello Restelli, Jan Peters
17 Jan 2024

Actor-Critic Model Predictive Control
Angel Romero, Yunlong Song, Davide Scaramuzza
16 Jun 2023

Vision-Language Models as Success Detectors
Yuqing Du, Ksenia Konyushkova, Misha Denil, A. Raju, Jessica Landon, Felix Hill, Nando de Freitas, Serkan Cabi
Tags: MLLM, LRM
13 Mar 2023

ProgPrompt: Generating Situated Robot Task Plans using Large Language Models
Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, D. Fox, Jesse Thomason, Animesh Garg
Tags: LM&Ro, LLMAG
22 Sep 2022

LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action
Dhruv Shah, B. Osinski, Brian Ichter, Sergey Levine
Tags: LM&Ro
10 Jul 2022

Language-Conditioned Imitation Learning for Robot Manipulation Tasks
Simon Stepputtis, Joseph Campbell, Mariano Phielipp, Stefan Lee, Chitta Baral, H. B. Amor
Tags: LM&Ro
22 Oct 2020

Transferring End-to-End Visuomotor Control from Simulation to Real World for a Multi-Stage Task
Stephen James, Andrew J. Davison, Edward Johns
07 Jul 2017