Effective Training and Inference Strategies for Point Classification in LiDAR Scenes

Publication date

2024-10-15T07:56:40Z

2024-06-13

Abstract

Light Detection and Ranging (LiDAR) systems serve as robust tools for creating three-dimensional representations of the Earth’s surface, known as point clouds. Point cloud scene segmentation is essential in a range of applications aimed at understanding the environment, such as infrastructure planning and monitoring. However, automating this process poses notable challenges due to variable point density across scenes, ambiguous object shapes, and substantial class imbalances. Consequently, manual intervention remains prevalent in point classification, allowing researchers to address these complexities. In this work, we study the elements contributing to the automatic semantic segmentation process with deep learning, conducting empirical evaluations on a dataset self-captured with a hybrid airborne laser scanning sensor combined with two nadir cameras in RGB and near-infrared over 247 km² of terrain characterized by hilly topography, urban areas, and dense forest cover. Our findings emphasize the importance of employing appropriate training and inference strategies to achieve accurate classification of data points across all categories. The proposed methodology not only facilitates the segmentation of point clouds of varying size but also yields a significant performance improvement over preceding methodologies, achieving a mean Intersection over Union (mIoU) of 94.24% on our self-captured dataset.
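The mIoU figure quoted in the abstract is the standard evaluation metric for semantic segmentation: per-class Intersection over Union, averaged over classes. A minimal sketch of how it is typically computed from flat label arrays (function name and array shapes are illustrative, not taken from the paper's code):

```python
import numpy as np

def mean_iou(y_true: np.ndarray, y_pred: np.ndarray, num_classes: int) -> float:
    """Mean Intersection over Union over classes present in either array.

    y_true, y_pred: 1-D integer label arrays of equal length (one label per point).
    """
    ious = []
    for c in range(num_classes):
        intersection = np.sum((y_true == c) & (y_pred == c))
        union = np.sum((y_true == c) | (y_pred == c))
        if union > 0:  # skip classes absent from both prediction and ground truth
            ious.append(intersection / union)
    return float(np.mean(ious))

# Toy example with 3 classes over 6 points:
truth = np.array([0, 0, 1, 1, 2, 2])
pred = np.array([0, 1, 1, 1, 2, 0])
print(mean_iou(truth, pred, num_classes=3))  # → 0.5
```

Note that class imbalance, which the abstract highlights as a core difficulty, is exactly why mIoU is preferred over plain accuracy: a rare class contributes to the mean with the same weight as a dominant one.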

Document type

Article


Published version

Language

English

Published by

MDPI

Related documents

Reproduction of the document published at: https://doi.org/10.3390/rs16122153

Remote Sensing, 2024, vol. 16, no. 12

https://doi.org/10.3390/rs16122153

Recommended citation

This citation was generated automatically.

Rights

cc-by (c) Carós, Mariona et al., 2024

http://creativecommons.org/licenses/by/4.0/

This item appears in the following collection(s)