
Train a Multi-Task Diffusion Policy on RLBench-18 in One Day with One GPU

Abstract

We present a method for training multi-task vision-language robotic diffusion policies that reduces training time and memory usage by an order of magnitude. This improvement arises from a previously underexplored distinction between action diffusion and the image diffusion techniques that inspired it: image generation targets are high-dimensional, while robot actions lie in a much lower-dimensional space. Meanwhile, the vision-language conditions for action generation remain high-dimensional. Our approach, Mini-Diffuser, exploits this asymmetry by introducing Level-2 minibatching, which pairs multiple noised action samples with each vision-language condition, instead of the conventional one-to-one sampling strategy. To support this batching scheme, we introduce architectural adaptations to the diffusion transformer that prevent information leakage across samples while maintaining full conditioning access. In RLBench simulations, Mini-Diffuser achieves 95% of the performance of state-of-the-art multi-task diffusion policies while using only 5% of the training time and 7% of the memory. Real-world experiments further validate that Mini-Diffuser preserves the key strengths of diffusion-based policies, including the ability to model multimodal action distributions and to produce behavior conditioned on diverse perceptual inputs. Code is available at this http URL.
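
To make the Level-2 minibatching idea concrete, below is a minimal sketch (not the authors' code) of a diffusion-policy training step in which each expensive vision-language condition is encoded once and shared by K independently noised action samples, so condition-side compute and memory scale with the condition batch size rather than with the number of noised samples. The names `encode_condition` and `denoiser`, the linear noise schedule, and the tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def level2_batch_loss(denoiser, encode_condition, obs, lang, actions, K=8):
    """obs/lang: vision-language inputs; actions: (B, T, A) clean action chunks."""
    B = actions.shape[0]
    cond = encode_condition(obs, lang)  # (B, C) -- computed once per condition

    # Level-2 minibatch: K noised copies of each action sample under the same condition.
    actions_rep = actions.unsqueeze(1).expand(B, K, *actions.shape[1:])  # (B, K, T, A)
    t = torch.randint(0, 1000, (B, K), device=actions.device)            # diffusion steps
    noise = torch.randn_like(actions_rep)
    # Placeholder linear schedule; a real implementation would follow its chosen scheduler.
    alpha = (1.0 - t.float() / 1000).view(B, K, 1, 1)
    noised = alpha.sqrt() * actions_rep + (1 - alpha).sqrt() * noise

    # The denoiser sees all B*K noised samples but attends to the same B condition
    # features, so the vision-language branch is not replicated K times.
    pred = denoiser(noised, t, cond.unsqueeze(1).expand(B, K, -1))
    return F.mse_loss(pred, noise)
```

Under this scheme the conventional setup corresponds to K=1; larger K amortizes the condition encoding over more denoising targets per gradient step, which is where the claimed time and memory savings come from.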

@article{hu2025_2505.09430,
  title={Train a Multi-Task Diffusion Policy on RLBench-18 in One Day with One GPU},
  author={Yutong Hu and Pinhao Song and Kehan Wen and Renaud Detry},
  journal={arXiv preprint arXiv:2505.09430},
  year={2025}
}