ResearchTrend.AI

H³DP: Triply-Hierarchical Diffusion Policy for Visuomotor Learning

12 May 2025
Yiyang Lu
Yufeng Tian
Zhecheng Yuan
Xianbang Wang
Pu Hua
Zhengrong Xue
Huazhe Xu
Abstract

Visuomotor policy learning has witnessed substantial progress in robotic manipulation, with recent approaches predominantly relying on generative models to model the action distribution. However, these methods often overlook the critical coupling between visual perception and action prediction. In this work, we introduce Triply-Hierarchical Diffusion Policy (H³DP), a novel visuomotor learning framework that explicitly incorporates hierarchical structures to strengthen the integration between visual features and action generation. H³DP contains 3 levels of hierarchy: (1) depth-aware input layering that organizes RGB-D observations based on depth information; (2) multi-scale visual representations that encode semantic features at varying levels of granularity; and (3) a hierarchically conditioned diffusion process that aligns the generation of coarse-to-fine actions with corresponding visual features. Extensive experiments demonstrate that H³DP yields a +27.5% average relative improvement over baselines across 44 simulation tasks and achieves superior performance in 4 challenging bimanual real-world manipulation tasks. Project Page: this https URL.
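The first level of the hierarchy, depth-aware input layering, can be pictured as binning an RGB-D observation into near-to-far slices. The sketch below is a hypothetical illustration of that idea, not the paper's implementation: the function name, the uniform bin edges, and the zero-fill for out-of-bin pixels are all assumptions for the example.

```python
import numpy as np

def depth_layered_observation(rgb, depth, num_layers=3):
    """Split an RGB-D observation into depth-ordered layers.

    Hypothetical sketch of depth-aware input layering: pixels are
    binned by depth, and each layer keeps only the RGB pixels whose
    depth falls in that bin (all other pixels are zeroed out).

    rgb:   (H, W, 3) array
    depth: (H, W) array
    Returns a (num_layers, H, W, 3) array ordered near-to-far.
    """
    # Uniform bin edges spanning the observed depth range (an assumption).
    edges = np.linspace(depth.min(), depth.max(), num_layers + 1)
    layers = np.zeros((num_layers,) + rgb.shape, dtype=rgb.dtype)
    for k in range(num_layers):
        # Close the last bin on the right so every pixel lands in exactly one layer.
        if k == num_layers - 1:
            mask = (depth >= edges[k]) & (depth <= edges[k + 1])
        else:
            mask = (depth >= edges[k]) & (depth < edges[k + 1])
        layers[k][mask] = rgb[mask]
    return layers
```

Because every pixel falls in exactly one bin, summing the layers recovers the original RGB image, which makes the decomposition easy to sanity-check.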

@article{lu2025_2505.07819,
  title={H$^{\mathbf{3}}$DP: Triply-Hierarchical Diffusion Policy for Visuomotor Learning},
  author={Yiyang Lu and Yufeng Tian and Zhecheng Yuan and Xianbang Wang and Pu Hua and Zhengrong Xue and Huazhe Xu},
  journal={arXiv preprint arXiv:2505.07819},
  year={2025}
}