LiDPM: Rethinking Point Diffusion for Lidar Scene Completion

24 April 2025
Tetiana Martyniuk
Gilles Puy
Alexandre Boulch
Renaud Marlet
Raoul de Charette
Abstract

Training diffusion models that work directly on lidar points at the scale of outdoor scenes is challenging due to the difficulty of generating fine-grained details from white noise over a broad field of view. The latest works addressing scene completion with diffusion models tackle this problem by reformulating the original DDPM as a local diffusion process. This contrasts with the common practice of operating at the level of objects, where vanilla DDPMs are currently used. In this work, we close the gap between these two lines of work. We identify approximations in the local diffusion formulation, show that they are not required to operate at the scene level, and that a vanilla DDPM with a well-chosen starting point is enough for completion. Finally, we demonstrate that our method, LiDPM, leads to better results in scene completion on SemanticKITTI. The project page is this https URL.
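To make the "well-chosen starting point" idea concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of completion with an unmodified DDPM: the partial scan is forward-noised to an intermediate step t_start and the standard reverse chain is run from there instead of from pure white noise. The denoiser interface, noise schedule, and value of t_start are illustrative assumptions, not details taken from the paper.

# Minimal sketch (assumed, not the authors' code): vanilla DDPM reverse
# process for scene completion, started from an intermediate step seeded
# with the observed partial point cloud instead of pure Gaussian noise.
import torch

T = 1000                                    # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 2e-2, T)       # linear noise schedule (assumed)
alphas = 1.0 - betas
alphas_bar = torch.cumprod(alphas, dim=0)   # cumulative product \bar{alpha}_t

@torch.no_grad()
def complete_scene(denoiser, partial_points, t_start=500):
    """Run the standard DDPM reverse chain from t_start down to 0.

    partial_points: (N, 3) lidar points of the incomplete scene.
    denoiser: a network eps_theta(x_t, t) predicting the added noise
              (signature is a placeholder assumption).
    """
    # Forward-noise the partial scan to the starting step, q(x_t | x_0),
    # so the chain starts near the data rather than from white noise.
    a_bar = alphas_bar[t_start]
    x = a_bar.sqrt() * partial_points + (1 - a_bar).sqrt() * torch.randn_like(partial_points)

    # Standard (vanilla) DDPM ancestral sampling, no local reformulation.
    for t in range(t_start, -1, -1):
        eps = denoiser(x, torch.tensor([t]))
        a, a_bar = alphas[t], alphas_bar[t]
        mean = (x - (1 - a) / (1 - a_bar).sqrt() * eps) / a.sqrt()
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + betas[t].sqrt() * noise
    return x  # estimate of the completed point cloud

Starting partway through the chain keeps the global scene layout supplied by the observed points while letting the diffusion model hallucinate the missing fine-grained structure.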

View on arXiv
@article{martyniuk2025_2504.17791,
  title={LiDPM: Rethinking Point Diffusion for Lidar Scene Completion},
  author={Tetiana Martyniuk and Gilles Puy and Alexandre Boulch and Renaud Marlet and Raoul de Charette},
  journal={arXiv preprint arXiv:2504.17791},
  year={2025}
}