Diffusion-Based Imaginative Coordination for Bimanual Manipulation

Huilin Xu
Jian Ding
Jiakun Xu
Ruixiang Wang
Jun Chen
Jinjie Mai
Yanwei Fu
Bernard Ghanem
Feng Xu
Mohamed Elhoseiny
Main: 8 pages · Appendix: 4 pages · Bibliography: 3 pages · 10 figures · 16 tables
Abstract

Bimanual manipulation is crucial in robotics, enabling complex tasks in industrial automation and household services. However, it poses significant challenges due to the high-dimensional action space and intricate coordination requirements. While video prediction has recently been studied for representation learning and control, leveraging its ability to capture rich dynamic and behavioral information, its potential for enhancing bimanual coordination remains underexplored. To bridge this gap, we propose a unified diffusion-based framework for the joint optimization of video and action prediction. Specifically, we propose a multi-frame latent prediction strategy that encodes future states in a compressed latent space, preserving task-relevant features. Furthermore, we introduce a unidirectional attention mechanism in which video prediction is conditioned on the action, while action prediction remains independent of video prediction. This design allows us to omit video prediction during inference, significantly enhancing efficiency. Experiments on two simulated benchmarks and a real-world setting demonstrate a significant improvement in success rate over the strong baseline ACT: our method achieves a 24.9% increase on ALOHA, an 11.1% increase on RoboTwin, and a 32.5% increase in real-world experiments. Our models and code are publicly available at this https URL.
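The unidirectional attention described above can be illustrated with a simple boolean attention mask. The sketch below is a hypothetical minimal construction (the token counts, function name, and layout are illustrative, not the paper's implementation): video tokens may attend to action tokens, but action tokens never attend to video tokens, so the video block can be dropped entirely at inference without changing the action branch.

```python
import numpy as np

def build_mask(n_action: int, n_video: int) -> np.ndarray:
    """Boolean attention mask over [action tokens | video tokens].

    True means attention is allowed. Action tokens attend only among
    themselves; video tokens attend to both actions and video.
    (Illustrative sketch, not the paper's actual implementation.)
    """
    n = n_action + n_video
    mask = np.zeros((n, n), dtype=bool)
    mask[:n_action, :n_action] = True   # actions -> actions only
    mask[n_action:, :] = True           # video -> actions and video
    return mask

mask = build_mask(2, 3)
# The action-to-action sub-mask is identical with or without video tokens,
# which is why the video branch can be omitted at inference time.
assert np.array_equal(mask[:2, :2], build_mask(2, 0))
```

The key design point is that the action rows of the mask never reference video columns, so removing the video tokens leaves action attention numerically unchanged.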
