ResearchTrend.AI

  3. 2511.00917

Maestro: Orchestrating Robotics Modules with Vision-Language Models for Zero-Shot Generalist Robots

2 November 2025
Junyao Shi
Rujia Yang
Kaitian Chao
Selina Bingqing Wan
Yifei Shao
Jiahui Lei
Jianing Qian
Long Le
Pratik Chaudhari
Kostas Daniilidis
Chuan Wen
Dinesh Jayaraman
Main: 6 pages · 5 figures · 4 tables · Bibliography: 3 pages · Appendix: 2 pages
Abstract

Today's best-explored routes towards generalist robots center on collecting ever larger "observations-in, actions-out" robotics datasets to train large end-to-end models, copying a recipe that has worked for vision-language models (VLMs). We pursue a road less traveled: building generalist policies directly around VLMs by augmenting their general capabilities with specific robot capabilities encapsulated in a carefully curated set of perception, planning, and control modules. In Maestro, a VLM coding agent dynamically composes these modules into a programmatic policy for the current task and scenario. Maestro's architecture benefits from a streamlined closed-loop interface without many manually imposed structural constraints, and from a comprehensive and diverse tool repertoire. As a result, it largely surpasses today's VLA models in zero-shot performance on challenging manipulation skills. Further, Maestro is easily extensible to incorporate new modules, easily editable to suit new embodiments such as a quadruped-mounted arm, and even adapts easily from minimal real-world experience through local code edits.
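The abstract's central idea, a VLM coding agent that composes perception, planning, and control modules into a "programmatic policy", can be illustrated with a minimal sketch. Everything below is hypothetical: the module names (`detect_objects`, `plan_grasp`, `move_gripper`) and their signatures are stand-ins for whatever curated modules Maestro actually exposes, and the composed function mimics the kind of plain code such an agent might emit for a simple pick task.

```python
# Hypothetical stand-ins for Maestro-style perception, planning,
# and control modules; the real interfaces are not given on this page.

def detect_objects(observation):
    """Perception stub: return named object positions from an observation."""
    return {"mug": (0.4, 0.1, 0.0)}

def plan_grasp(position):
    """Planning stub: pick a pre-grasp waypoint 10 cm above the object."""
    x, y, z = position
    return (x, y, z + 0.10)

def move_gripper(target, state):
    """Control stub: record the commanded waypoint instead of moving a robot."""
    state["waypoints"].append(target)
    return state

def composed_policy(observation, state):
    """The kind of programmatic policy a VLM coding agent might write:
    ordinary code sequencing perception -> planning -> control each step."""
    objects = detect_objects(observation)
    grasp = plan_grasp(objects["mug"])
    return move_gripper(grasp, state)

state = {"waypoints": []}
state = composed_policy(observation=None, state=state)
print(state["waypoints"])  # one commanded waypoint above the detected mug
```

Because the policy is ordinary code rather than network weights, the "local code edits" the abstract mentions amount to editing functions like `composed_policy` directly, which is what makes the approach easy to extend to new modules or embodiments.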
