Diffusion Models for Robotic Manipulation: A Survey

11 April 2025
Rosa Wolf
Yitian Shi
Sheng Liu
Rania Rayyes
Abstract

Diffusion generative models have demonstrated remarkable success in visual domains such as image and video generation. They have also recently emerged as a promising approach in robotics, especially in robot manipulation. Diffusion models leverage a probabilistic framework, and they stand out for their ability to model multi-modal distributions and their robustness to high-dimensional input and output spaces. This survey provides a comprehensive review of state-of-the-art diffusion models in robotic manipulation, including grasp learning, trajectory planning, and data augmentation. Diffusion models for scene and image augmentation lie at the intersection of robotics and computer vision; in vision-based tasks they are used to improve generalizability and mitigate data scarcity. This paper also presents the two main frameworks of diffusion models and their integration with imitation learning and reinforcement learning. In addition, it discusses the common architectures and benchmarks, and points out the challenges and advantages of current state-of-the-art diffusion-based methods.
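
For context: in the diffusion literature, the "two main frameworks" usually refer to denoising diffusion probabilistic models (DDPMs) and score-based models formulated via stochastic differential equations (SDEs). A minimal sketch of both, with standard notation assumed rather than taken from the paper: a DDPM corrupts data x_0 with Gaussian noise through a fixed forward process and learns the reverse denoising process,

\[ q(x_t \mid x_{t-1}) = \mathcal{N}\!\big(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t I\big), \qquad p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\!\big(x_{t-1};\ \mu_\theta(x_t, t),\ \Sigma_\theta(x_t, t)\big), \]

while in the score-based view, sampling runs a reverse-time SDE driven by the learned score \(\nabla_x \log p_t(x)\):

\[ \mathrm{d}x = \big[f(x,t) - g(t)^2\,\nabla_x \log p_t(x)\big]\,\mathrm{d}t + g(t)\,\mathrm{d}\bar{w}. \]

Here \(\beta_t\) is the noise schedule, and \(f\) and \(g\) are the drift and diffusion coefficients of the forward SDE.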

View on arXiv
@article{wolf2025_2504.08438,
  title={Diffusion Models for Robotic Manipulation: A Survey},
  author={Rosa Wolf and Yitian Shi and Sheng Liu and Rania Rayyes},
  journal={arXiv preprint arXiv:2504.08438},
  year={2025}
}