ResearchTrend.AI
Collision avoidance from monocular vision trained with novel view synthesis

9 April 2025
Valentin Tordjman--Levavasseur
Stéphane Caron
Abstract

Collision avoidance can be checked in explicit environment models such as elevation maps or occupancy grids, yet integrating such models with a locomotion policy requires accurate state estimation. In this work, we consider the question of collision avoidance from an implicit environment model. We use monocular RGB images as inputs and train a collision-avoidance policy from photorealistic images generated by 2D Gaussian splatting. We evaluate the resulting pipeline in real-world experiments under velocity commands that put the robot on an intercept course with obstacles. Our results suggest that RGB images can be enough to make collision-avoidance decisions, both in the room where training data was collected and in out-of-distribution environments.
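The abstract describes a policy that maps a monocular RGB frame and a commanded velocity to a collision-avoiding action. The sketch below shows only a plausible interface for such a pipeline, with a toy stand-in for the learned network; all names are hypothetical and the paper's actual architecture and training details are not reproduced here.

```python
import numpy as np

def collision_filter(rgb_image: np.ndarray, commanded_velocity: np.ndarray,
                     policy) -> np.ndarray:
    """Hypothetical interface: run a policy that turns an RGB frame and a
    velocity command into a velocity that avoids collisions."""
    features = rgb_image.astype(np.float32) / 255.0  # normalize pixels to [0, 1]
    return policy(features, commanded_velocity)

def toy_policy(features: np.ndarray, command: np.ndarray) -> np.ndarray:
    # Placeholder for the learned policy, NOT the paper's model: damp the
    # command proportionally to mean image intensity as a fake proximity cue.
    proximity = float(features.mean())
    return command * (1.0 - proximity)

# Dummy 64x48 RGB frame and a forward velocity command.
image = np.full((48, 64, 3), 128, dtype=np.uint8)
command = np.array([0.5, 0.0])
safe_velocity = collision_filter(image, command, toy_policy)
```

In the paper's setting, `toy_policy` would be replaced by the network trained on photorealistic views rendered with 2D Gaussian splatting; the filter shape (command in, moderated command out) matches the intercept-course evaluation described above.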

@article{tordjman--levavasseur2025_2504.06651,
  title={Collision avoidance from monocular vision trained with novel view synthesis},
  author={Valentin Tordjman--Levavasseur and Stéphane Caron},
  journal={arXiv preprint arXiv:2504.06651},
  year={2025}
}