Title:
Deconvolutional networks for point-cloud vehicle detection and tracking in driving scenarios
|
Author(s):
Vaquero Gómez, Víctor; del Pino Bastida, Iván; Moreno-Noguer, Francesc; Solà Ortega, Joan; Sanfeliu Cortés, Alberto; Andrade-Cetto, Juan
|
Other authors:
Institut de Robòtica i Informàtica Industrial; Universitat Politècnica de Catalunya. Departament d'Enginyeria de Sistemes, Automàtica i Informàtica Industrial; Universitat Politècnica de Catalunya. VIS - Visió Artificial i Sistemes Intel·ligents; Universitat Politècnica de Catalunya. ROBiri - Grup de Robòtica de l'IRI
Copyright notice:
© 20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Abstract:
Vehicle detection and tracking is a core ingredient for developing autonomous driving applications in urban scenarios. Recent image-based Deep Learning (DL) techniques are obtaining breakthrough results in these perception tasks. However, DL research has not yet advanced much towards processing 3D point clouds from lidar range-finders. These sensors are very common in autonomous vehicles since, despite not providing as semantically rich information as images, they are more robust to harsh weather conditions than vision sensors. In this paper we present a full vehicle detection and tracking system that works with 3D lidar information only. Our detection step uses a Convolutional Neural Network (CNN) that receives as input a featured representation of the 3D information provided by a Velodyne HDL-64 sensor and returns a per-point classification of whether it belongs to a vehicle or not. The classified point cloud is then geometrically processed to generate observations for a multi-object tracking system implemented via a number of Multi-Hypothesis Extended Kalman Filters (MH-EKF) that estimate the position and velocity of the surrounding vehicles. The system is thoroughly evaluated on the KITTI tracking dataset, and we show the performance boost provided by our CNN-based vehicle detector over a standard geometric approach. Our lidar-based approach uses only about 4% of the data needed by an image-based detector while achieving similarly competitive results.
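The detection step described in the abstract feeds the CNN a dense, image-like encoding of the raw Velodyne scan. Below is a minimal sketch of one such front-view projection, for illustration only: the bin counts, fields of view, and the choice of per-cell range as the feature channel are assumptions, not the paper's exact featurization.

```python
import numpy as np

def pointcloud_to_frontview(points, h_bins=512, v_bins=64,
                            h_fov=(-np.pi, np.pi),
                            v_fov=(-0.4363, 0.0349)):
    """Project an (N, 3) lidar point cloud onto a 2D front-view grid.

    Each cell stores the range of the closest point falling in it,
    producing a dense, image-like array a CNN can consume.
    Illustrative sketch only; bin sizes and fields of view are assumed.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    azimuth = np.arctan2(y, x)                      # horizontal angle
    elevation = np.arcsin(z / np.maximum(r, 1e-9))  # vertical angle

    # Map angles to integer grid coordinates.
    u = ((azimuth - h_fov[0]) / (h_fov[1] - h_fov[0])
         * (h_bins - 1)).astype(int)
    v = ((elevation - v_fov[0]) / (v_fov[1] - v_fov[0])
         * (v_bins - 1)).astype(int)
    valid = (u >= 0) & (u < h_bins) & (v >= 0) & (v < v_bins)

    grid = np.full((v_bins, h_bins), np.inf)
    # Keep the closest range per cell (unbuffered in-place minimum).
    np.minimum.at(grid, (v[valid], u[valid]), r[valid])
    grid[np.isinf(grid)] = 0.0  # cells hit by no point
    return grid
```

A per-point vehicle/background prediction made on this grid can then be mapped back to the original points, which is what allows the subsequent geometric processing and MH-EKF tracking stage to operate on classified 3D points.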
Review status:
Peer Reviewed
Subject(s):
Àrees temàtiques de la UPC::Informàtica::Automàtica i control; object detection; pattern classification; pattern recognition; vehicle detection; lidar; vehicle tracking; deep learning; Classificació INSPEC::Pattern recognition
Rights:
Attribution-NonCommercial-NoDerivs 3.0 Spain
http://creativecommons.org/licenses/by-nc-nd/3.0/es/
Document type:
Article - Submitted version; Conference object
Publisher:
IEEE Press