Visuomotor policy learning has witnessed substantial progress in robotic manipulation, with recent approaches predominantly relying on generative models to model the action distribution. However, these methods often overlook the critical coupling between visual perception and action prediction. In this work, we introduce \textbf{Triply-Hierarchical Diffusion Policy}~(\textbf{H$^{\mathbf{3}}$DP}), a novel visuomotor learning framework that explicitly incorporates hierarchical structures to strengthen the integration between visual features and action generation. H$^{3}$DP contains three levels of hierarchy: (1) depth-aware input layering that organizes RGB-D observations based on depth information; (2) multi-scale visual representations that encode semantic features at varying levels of granularity; and (3) a hierarchically conditioned diffusion process that aligns the generation of coarse-to-fine actions with corresponding visual features. Extensive experiments demonstrate that H$^{3}$DP yields an average relative improvement over baselines across simulation tasks and achieves superior performance in challenging bimanual real-world manipulation tasks. Project Page: this https URL.
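The following is a minimal sketch, not the authors' implementation, illustrating how the three hierarchy levels named in the abstract could fit together: depth-aware layering of an RGB-D observation, coarse-to-fine visual features, and a denoising loop whose early (coarse) steps are conditioned on coarse features and whose late (fine) steps are conditioned on fine features. All function names, the pooling-based features, and the toy denoising update are illustrative assumptions; the paper's actual networks and diffusion parameterization are not reproduced here.

```python
import numpy as np

def depth_layering(rgb, depth, num_layers=3):
    """Level 1 (assumed form): split an RGB-D observation into depth bins,
    returning one masked RGB image per layer, ordered near to far."""
    edges = np.quantile(depth, np.linspace(0.0, 1.0, num_layers + 1))
    layers = []
    for i in range(num_layers):
        mask = (depth >= edges[i]) & (depth <= edges[i + 1])
        layers.append(rgb * mask[..., None])  # zero out pixels outside this depth bin
    return layers

def multi_scale_features(image, scales=(8, 4, 1)):
    """Level 2 (assumed form): coarse-to-fine features via average pooling
    at decreasing downsampling factors (coarsest first)."""
    feats = []
    for s in scales:
        h, w = image.shape[0] // s, image.shape[1] // s
        pooled = image[:h * s, :w * s].reshape(h, s, w, s, -1).mean(axis=(1, 3))
        feats.append(pooled.ravel())
    return feats

def hierarchical_denoise(action, feats, num_steps=30):
    """Level 3 (assumed form): align denoising steps with feature scales,
    conditioning early steps on coarse features and late steps on fine ones.
    A learned noise predictor would replace this toy update."""
    for t in range(num_steps, 0, -1):
        level = min(len(feats) - 1, ((num_steps - t) * len(feats)) // num_steps)
        cond = feats[level]
        # Toy conditioned update pulling the action toward a feature-derived target.
        action = action - 0.05 * (action - np.tanh(cond[:action.size].reshape(action.shape)))
    return action

# Toy usage on random data.
rgb = np.random.rand(64, 64, 3)
depth = np.random.rand(64, 64)
layers = depth_layering(rgb, depth)
feats = multi_scale_features(layers[0])
action = hierarchical_denoise(np.random.randn(8), feats)
```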
@article{lu2025_2505.07819,
  title   = {H$^{\mathbf{3}}$DP: Triply-Hierarchical Diffusion Policy for Visuomotor Learning},
  author  = {Yiyang Lu and Yufeng Tian and Zhecheng Yuan and Xianbang Wang and Pu Hua and Zhengrong Xue and Huazhe Xu},
  journal = {arXiv preprint arXiv:2505.07819},
  year    = {2025}
}