Organ Segmentation in Poultry Viscera Using RGB-D

Publication date

2018-06-14T11:46:52Z


2018-01-03


Abstract

We present a pattern recognition framework for semantic segmentation of visual structures, that is, multi-class labelling at pixel level, and apply it to the task of segmenting organs in eviscerated viscera from slaughtered poultry in RGB-D images. This is a step towards replacing the current strenuous manual inspection at poultry processing plants. Features are extracted from feature maps such as activation maps from a convolutional neural network (CNN). A random forest classifier assigns class probabilities, which are further refined by exploiting context in a conditional random field. The presented method is compatible with both 2D and 3D features, which allows us to explore the value of adding 3D and CNN-derived features. The dataset consists of 604 RGB-D images showing 151 unique sets of eviscerated viscera from four different perspectives. A mean Jaccard index of 78.11% is achieved across the four organ classes using features derived from 2D, 3D and a CNN, compared to 74.28% using only basic 2D image features.
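The mean Jaccard index reported in the abstract is the intersection-over-union between predicted and ground-truth label maps, averaged over the organ classes. A minimal NumPy sketch (function name and toy label maps are illustrative, not from the paper):

```python
import numpy as np

def mean_jaccard(pred, gt, classes):
    """Mean Jaccard index (IoU) over the given class labels.

    pred, gt: integer label maps of identical shape.
    classes: iterable of class ids to average over.
    """
    scores = []
    for c in classes:
        p = pred == c
        g = gt == c
        union = np.logical_or(p, g).sum()
        if union == 0:
            continue  # class absent from both maps; nothing to score
        inter = np.logical_and(p, g).sum()
        scores.append(inter / union)
    return float(np.mean(scores))

# Toy 2x3 label maps with background (0) and two organ classes (1, 2)
pred = np.array([[1, 1, 2], [0, 2, 2]])
gt   = np.array([[1, 1, 2], [1, 2, 2]])
print(mean_jaccard(pred, gt, classes=[1, 2]))  # (2/3 + 3/3) / 2 = 0.8333...
```

Averaging per class rather than per pixel prevents large organs from dominating the score, which matters when class sizes are unbalanced, as with viscera.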

Document type

Article


Published version

Language

English

Published by

MDPI

Related documents

Reproduction of the document published at: https://doi.org/10.3390/s18010117

Sensors, 2018, vol. 18(1), num. 117


Recommended citation

This citation has been generated automatically.

Rights

cc-by (c) Philipsen, Mark Philip et al., 2018

http://creativecommons.org/licenses/by/3.0/es

This item appears in the following collection(s)