arXiv: 2501.00296 (v3, latest)
From Pixels to Predicates: Learning Symbolic World Models via Pretrained Vision-Language Models
31 December 2024
Ashay Athalye, Nishanth Kumar, Tom Silver, Yichao Liang, Tomás Lozano-Pérez, Leslie Pack Kaelbling
Tags: LM&Ro
Papers citing "From Pixels to Predicates: Learning Symbolic World Models via Pretrained Vision-Language Models" (7 papers)
Learning Compositional Behaviors from Demonstration and Language (28 May 2025)
Weiyu Liu, Neil Nie, Ruohan Zhang, Jiayuan Mao, Jiajun Wu
Tags: LM&Ro
Coloring Between the Lines: Personalization in the Null Space of Planning Constraints (21 May 2025)
Tom Silver, Rajat Kumar Jenamani, Ziang Liu, Ben Dodson, Tapomayukh Bhattacharjee
ViPlan: A Benchmark for Visual Planning with Symbolic Predicates and Vision-Language Models (19 May 2025)
Matteo Merler, Nicola Dainese, Minttu Alakuijala, Giovanni Bonetta, Pietro Ferrazzi, Yu Tian, Bernardo Magnini, Pekka Marttinen
Tags: LM&Ro, VLM
Symbolically-Guided Visual Plan Inference from Uncurated Video Data (13 May 2025)
Wenyan Yang, Ahmet Tikna, Yi Zhao, Yuying Zhang, Luigi Palopoli, Marco Roveri, Joni Pajarinen
Tags: VGen
Bilevel Learning for Bilevel Planning (12 Feb 2025)
Bowen Li, Tom Silver, Sebastian A. Scherer, Alexander G. Gray
Open-World Task and Motion Planning via Vision-Language Model Inferred Constraints (13 Nov 2024)
Nishanth Kumar, F. Ramos, Dieter Fox, Caelan Reed Garrett, Tomás Lozano-Pérez, Leslie Pack Kaelbling
Tags: LRM, LM&Ro
Verifiably Following Complex Robot Instructions with Foundation Models (18 Feb 2024)
Benedict Quartey, Eric Rosen, Stefanie Tellex, George Konidaris
Tags: LM&Ro