Touch2Shape: Touch-Conditioned 3D Diffusion for Shape Exploration and Reconstruction

Diffusion models have made breakthroughs in 3D generation tasks. Current 3D diffusion models focus on reconstructing the target shape from images or a set of partial observations. While excelling at global context understanding, they struggle to capture the local details of complex shapes and are limited by occlusion and lighting conditions. To overcome these limitations, we utilize tactile images to capture local 3D information and propose a Touch2Shape model, which leverages a touch-conditioned diffusion model to explore and reconstruct the target shape from touch. For shape reconstruction, we develop a touch embedding module to condition the diffusion model into creating a compact representation, and a touch-shape fusion module to refine the reconstructed shape. For shape exploration, we combine the diffusion model with reinforcement learning to train a policy, using the latent vector generated by the diffusion model to guide the touch exploration policy through a novel reward design. Experiments validate the reconstruction quality through both qualitative and quantitative analysis, and our touch exploration policy further boosts reconstruction performance.
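The abstract does not give implementation details, so the sketch below is only a rough, hypothetical illustration of the described pipeline: the module names, network sizes, noise schedule, and reward function are assumptions rather than the paper's actual architecture. It shows a touch encoder producing a conditioning vector, a latent-space denoiser trained with a DDPM-style noise-prediction loss, and a simple reward computed from the distance between a generated latent and a target latent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TouchEmbedding(nn.Module):
    """Hypothetical encoder: tactile images -> conditioning vectors."""
    def __init__(self, in_channels=3, embed_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, touch_images):           # (B, 3, H, W)
        return self.encoder(touch_images)      # (B, embed_dim)


class ConditionedDenoiser(nn.Module):
    """Predicts the noise added to a compact shape latent, conditioned on touch."""
    def __init__(self, latent_dim=256, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + embed_dim + 1, 512), nn.SiLU(),
            nn.Linear(512, 512), nn.SiLU(),
            nn.Linear(512, latent_dim),
        )

    def forward(self, noisy_latent, touch_embed, t):
        t = t.float().unsqueeze(-1) / 1000.0    # normalized timestep as extra input
        return self.net(torch.cat([noisy_latent, touch_embed, t], dim=-1))


def diffusion_training_step(denoiser, touch_embed, clean_latent, num_steps=1000):
    """One DDPM-style step: noise the latent, predict the noise, regress with MSE."""
    b = clean_latent.shape[0]
    t = torch.randint(0, num_steps, (b,))
    # Illustrative cosine-like schedule (assumption, not the paper's schedule).
    alpha_bar = torch.cos(t.float() / num_steps * torch.pi / 2).pow(2).unsqueeze(-1)
    noise = torch.randn_like(clean_latent)
    noisy = alpha_bar.sqrt() * clean_latent + (1 - alpha_bar).sqrt() * noise
    pred = denoiser(noisy, touch_embed, t)
    return F.mse_loss(pred, noise)


def exploration_reward(generated_latent, target_latent):
    """Hypothetical reward: negative latent-space error, one value per sample."""
    return -F.mse_loss(generated_latent, target_latent, reduction="none").mean(dim=-1)


if __name__ == "__main__":
    touch_enc, denoiser = TouchEmbedding(), ConditionedDenoiser()
    touches = torch.randn(4, 3, 64, 64)         # dummy tactile images
    latents = torch.randn(4, 256)               # dummy ground-truth shape latents
    loss = diffusion_training_step(denoiser, touch_enc(touches), latents)
    reward = exploration_reward(latents + 0.1 * torch.randn_like(latents), latents)
    print(loss.item(), reward.shape)
```

In an actual exploration loop, a reward of this form could be fed to a standard policy-gradient or actor-critic trainer so that touches which bring the generated latent closer to the target are reinforced; the abstract only states that the generated latent guides the policy through a novel reward, not this specific formulation.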
@article{wang2025_2505.13091,
  title   = {Touch2Shape: Touch-Conditioned 3D Diffusion for Shape Exploration and Reconstruction},
  author  = {Yuanbo Wang and Zhaoxuan Zhang and Jiajin Qiu and Dilong Sun and Zhengyu Meng and Xiaopeng Wei and Xin Yang},
  journal = {arXiv preprint arXiv:2505.13091},
  year    = {2025}
}