
RoboOmni: Proactive Robot Manipulation in Omni-modal Context

27 October 2025
Siyin Wang
Jinlan Fu
Feihong Liu
Xinzhe He
Huangxuan Wu
Junhao Shi
Kexin Huang
Zhaoye Fei
Jingjing Gong
Z. F. Wu
Yugang Jiang
See-Kiong Ng
Tat-Seng Chua
Xipeng Qiu
Community: LM&Ro
arXiv:2510.23763 (abs) · PDF · HTML · HuggingFace (52 upvotes) · GitHub (41★)
Main: 9 pages · 14 figures · 2 tables · Appendix: 22 pages
Abstract

Recent advances in Multimodal Large Language Models (MLLMs) have driven rapid progress in Vision-Language-Action (VLA) models for robotic manipulation. Although effective in many scenarios, current approaches largely rely on explicit instructions, whereas in real-world interactions, humans rarely issue instructions directly. Effective collaboration requires robots to infer user intentions proactively. In this work, we introduce cross-modal contextual instructions, a new setting where intent is derived from spoken dialogue, environmental sounds, and visual cues rather than explicit commands. To address this new setting, we present RoboOmni, a Perceiver-Thinker-Talker-Executor framework based on end-to-end omni-modal LLMs that unifies intention recognition, interaction confirmation, and action execution. RoboOmni fuses auditory and visual signals spatiotemporally for robust intention recognition, while supporting direct speech interaction. To address the absence of training data for proactive intention recognition in robotic manipulation, we build OmniAction, comprising 140k episodes, 5k+ speakers, 2.4k event sounds, 640 backgrounds, and six contextual instruction types. Experiments in simulation and real-world settings show that RoboOmni surpasses text- and ASR-based baselines in success rate, inference speed, intention recognition, and proactive assistance.
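The abstract outlines a Perceiver-Thinker-Talker-Executor flow. The sketch below is a minimal, hypothetical Python illustration of that kind of control loop, not the paper's implementation: all class names, method signatures, thresholds, and placeholder logic are assumptions. It only shows the intended sequence described in the abstract: fuse audio and vision, infer an intent, confirm through speech when uncertain, then execute.

```python
# Hypothetical sketch of a Perceiver-Thinker-Talker-Executor loop for
# cross-modal contextual instructions. All names, shapes, and thresholds
# are illustrative assumptions, not the RoboOmni implementation.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class OmniContext:
    """One step of omni-modal context: spoken dialogue, ambient sound, vision."""
    dialogue_audio: bytes          # raw spoken dialogue between users
    event_audio: bytes             # environmental sounds (e.g., a kettle whistling)
    rgb_frames: List[bytes]        # recent camera frames


@dataclass
class Intent:
    """Inferred task plus a confidence used to decide whether to confirm first."""
    task: str                      # e.g., "hand over the mug"
    confidence: float


class RoboOmniSketch:
    """Toy pipeline: perceive -> think -> talk (confirm) -> execute."""

    def perceive(self, ctx: OmniContext) -> dict:
        # Placeholder for spatiotemporal fusion of auditory and visual tokens.
        return {"audio_tokens": [], "visual_tokens": []}

    def think(self, fused: dict) -> Intent:
        # Placeholder for the omni-modal LLM inferring intent from context.
        return Intent(task="hand over the mug", confidence=0.62)

    def talk(self, intent: Intent) -> Optional[str]:
        # Proactive confirmation: speak back to the user when the intent is uncertain.
        if intent.confidence < 0.8:
            return f"It sounds like you might want me to {intent.task}. Should I?"
        return None

    def execute(self, intent: Intent) -> None:
        # Placeholder for emitting low-level manipulation actions.
        print(f"[executor] performing: {intent.task}")

    def step(self, ctx: OmniContext, user_confirms: bool = True) -> None:
        intent = self.think(self.perceive(ctx))
        question = self.talk(intent)
        if question is not None:
            print(f"[talker] {question}")
            if not user_confirms:
                return  # user declined; take no action this step
        self.execute(intent)


if __name__ == "__main__":
    ctx = OmniContext(dialogue_audio=b"", event_audio=b"", rgb_frames=[])
    RoboOmniSketch().step(ctx)
```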
