Conditioning Matters: Training Diffusion Policies is Faster Than You Think

Abstract

Diffusion policies have emerged as a mainstream paradigm for building vision-language-action (VLA) models. Although they demonstrate strong robot control capabilities, their training efficiency remains suboptimal. In this work, we identify a fundamental challenge in conditional diffusion policy training: when generative conditions are hard to distinguish, the training objective degenerates into modeling the marginal action distribution, a phenomenon we term loss collapse. To overcome this, we propose Cocos, a simple yet general solution that modifies the source distribution in the conditional flow matching to be condition-dependent. By anchoring the source distribution around semantics extracted from condition inputs, Cocos encourages stronger condition integration and prevents the loss collapse. We provide theoretical justification and extensive empirical results across simulation and real-world benchmarks. Our method achieves faster convergence and higher success rates than existing approaches, matching the performance of large-scale pre-trained VLAs using significantly fewer gradient steps and parameters. Cocos is lightweight, easy to implement, and compatible with diverse policy architectures, offering a general-purpose improvement to diffusion policy training.
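The abstract's central idea, replacing the standard Gaussian source of conditional flow matching with one anchored on the condition embedding, can be sketched as follows. This is a minimal illustration, not the paper's implementation: `embed_condition` and `velocity_net` are hypothetical placeholders standing in for a learned condition encoder and the policy network, and the linear interpolation path is the common straight-line CFM choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_condition(c):
    # Hypothetical encoder mapping condition features to the action space;
    # a fixed linear map stands in for learned condition semantics.
    W = np.full((c.shape[-1], 2), 0.1)
    return c @ W

def velocity_net(xt, t, c):
    # Placeholder for the policy network that predicts the flow velocity.
    return np.zeros_like(xt)

def condition_anchored_cfm_loss(actions, conds, sigma=0.5):
    # Standard CFM draws the source x0 ~ N(0, I), identically for every
    # condition. Anchoring it on the condition instead, x0 ~ N(mu(c), sigma^2 I),
    # ties the source distribution to the condition so the objective cannot
    # degenerate into modeling only the marginal action distribution.
    mu = embed_condition(conds)
    x0 = mu + sigma * rng.standard_normal(actions.shape)
    t = rng.uniform(size=(actions.shape[0], 1))
    xt = (1.0 - t) * x0 + t * actions      # linear probability path
    target = actions - x0                  # straight-line velocity target
    pred = velocity_net(xt, t, conds)
    return float(np.mean((pred - target) ** 2))

actions = rng.standard_normal((8, 2))      # batch of 2-D actions
conds = rng.standard_normal((8, 3))        # batch of condition features
loss = condition_anchored_cfm_loss(actions, conds)
```

With a real encoder and policy network, minimizing this loss by gradient descent trains the velocity field exactly as in standard flow matching; only the source sampling line changes.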

@article{dong2025_2505.11123,
  title={Conditioning Matters: Training Diffusion Policies is Faster Than You Think},
  author={Zibin Dong and Yicheng Liu and Yinchuan Li and Hang Zhao and Jianye Hao},
  journal={arXiv preprint arXiv:2505.11123},
  year={2025}
}