arXiv:2008.09672
Towards Autonomous Driving: a Multi-Modal 360° Perception Proposal

21 August 2020
Jorge Beltrán
Carlos Guindel
Irene Cortés
Alejandro Barrera
Armando Astudillo
Jesús Urdiales
Mario Álvarez
Farid Bekka
V. Milanés
F. García
Abstract

In this paper, a multi-modal 360° framework for 3D object detection and tracking for autonomous vehicles is presented. The process is divided into four main stages. First, images are fed into a CNN to obtain instance segmentation of the surrounding road participants. Second, LiDAR-to-image association is performed for the estimated mask proposals. Then, the isolated points of every object are processed by a PointNet ensemble to compute their corresponding 3D bounding boxes and poses. Lastly, a tracking stage based on an Unscented Kalman Filter is used to track the agents over time. The solution, based on a novel sensor fusion configuration, provides accurate and reliable road environment detection. A wide variety of tests of the system, deployed in an autonomous vehicle, have successfully assessed the suitability of the proposed perception stack in a real autonomous driving application.
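The second stage of the pipeline, LiDAR-to-image association, can be sketched as projecting each LiDAR point into the camera image and keeping the points that fall inside an instance mask. The snippet below is a minimal illustration of that idea, not the authors' implementation: the function name, the extrinsic matrix `T_cam_lidar`, the intrinsic matrix `K`, and the boolean `mask` are all assumed for the example.

```python
import numpy as np

def associate_points_to_mask(points_lidar, T_cam_lidar, K, mask):
    """Return the LiDAR points whose image projections land on `mask`.

    points_lidar : (N, 3) points in the LiDAR frame (illustrative input).
    T_cam_lidar  : (4, 4) assumed LiDAR-to-camera extrinsic transform.
    K            : (3, 3) assumed pinhole camera intrinsic matrix.
    mask         : (H, W) boolean instance-segmentation mask.
    """
    # Transform points into the camera frame using homogeneous coordinates.
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Discard points behind the camera before projecting.
    in_front = pts_cam[:, 2] > 0
    pts_cam = pts_cam[in_front]

    # Pinhole projection to pixel coordinates (u, v).
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u = uv[:, 0].astype(int)
    v = uv[:, 1].astype(int)

    # Keep projections inside the image that hit the instance mask.
    h, w = mask.shape
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    hit = np.zeros(len(u), dtype=bool)
    hit[valid] = mask[v[valid], u[valid]]
    return points_lidar[in_front][hit]
```

In a full system, this association would run once per mask proposal, and the isolated point subsets would then be handed to the 3D box estimation stage.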
