MaskPlanner: Learning-Based Object-Centric Motion Generation from 3D Point Clouds

Abstract

Object-Centric Motion Generation (OCMG) plays a key role in a variety of industrial applications, such as robotic spray painting and welding, requiring efficient, scalable, and generalizable algorithms to plan multiple long-horizon trajectories over free-form 3D objects. However, existing solutions rely on specialized heuristics, expensive optimization routines, or restrictive geometry assumptions that limit their adaptability to real-world scenarios. In this work, we introduce a novel, fully data-driven framework that tackles OCMG directly from 3D point clouds, learning to generalize expert path patterns across free-form surfaces. We propose MaskPlanner, a deep learning method that predicts local path segments for a given object while simultaneously inferring "path masks" to group these segments into distinct paths. This design induces the network to capture both local geometric patterns and global task requirements in a single forward pass. Extensive experimentation on a realistic robotic spray painting scenario shows that our approach attains near-complete coverage (above 99%) for unseen objects, while remaining task-agnostic and not explicitly optimizing for paint deposition. Moreover, our real-world validation on a 6-DoF specialized painting robot demonstrates that the generated trajectories are directly executable and yield expert-level painting quality. Our findings highlight the potential of the proposed learning method for OCMG to reduce engineering overhead and seamlessly adapt to several industrial use cases.
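To make the abstract's description concrete, the following is a minimal, hypothetical sketch of the prediction scheme it outlines: a point-cloud encoder produces a per-object descriptor, from which one head regresses a set of candidate path segments and a second head predicts "path mask" logits that group segments into distinct paths, all in a single forward pass. The class name, the PointNet-style encoder, the two-head layout, and all dimensions are illustrative assumptions, not the authors' actual architecture.

# Hypothetical sketch of a MaskPlanner-style network (not the authors' code).
# A shared point-cloud encoder feeds two heads: one regresses N candidate
# path segments, the other predicts K "path mask" logits per segment that
# group the segments into distinct long-horizon paths.
import torch
import torch.nn as nn

class MaskPlannerSketch(nn.Module):
    def __init__(self, num_segments=256, num_paths=16, seg_dim=12, feat_dim=512):
        super().__init__()
        # PointNet-style per-point MLP followed by global max pooling.
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )
        # Head 1: regress num_segments local path segments (e.g., a short
        # polyline of poses flattened into seg_dim values per segment).
        self.segment_head = nn.Linear(feat_dim, num_segments * seg_dim)
        # Head 2: per-segment logits over num_paths path masks, used to
        # assign each predicted segment to one path.
        self.mask_head = nn.Linear(feat_dim, num_segments * num_paths)
        self.num_segments, self.num_paths, self.seg_dim = num_segments, num_paths, seg_dim

    def forward(self, points):  # points: (B, P, 3) object point cloud
        feats = self.point_mlp(points)            # (B, P, feat_dim) per-point features
        global_feat = feats.max(dim=1).values     # (B, feat_dim) global object descriptor
        segments = self.segment_head(global_feat).view(-1, self.num_segments, self.seg_dim)
        mask_logits = self.mask_head(global_feat).view(-1, self.num_segments, self.num_paths)
        return segments, mask_logits              # both outputs from a single forward pass

# Usage: predict segments and path-mask assignments for a batch of objects.
net = MaskPlannerSketch()
cloud = torch.randn(2, 2048, 3)                   # two objects, 2048 points each
segments, mask_logits = net(cloud)
path_ids = mask_logits.argmax(dim=-1)             # group each segment into a path

The argmax over mask logits is one plausible way to turn the predicted masks into discrete segment-to-path assignments; ordering the segments within each path into an executable trajectory would be a separate post-processing step.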

@article{tiboni2025_2502.18745,
  title={MaskPlanner: Learning-Based Object-Centric Motion Generation from 3D Point Clouds},
  author={Gabriele Tiboni and Raffaello Camoriano and Tatiana Tommasi},
  journal={arXiv preprint arXiv:2502.18745},
  year={2025}
}