Uncertainty-based Rejection Wrappers for Black-box Classifiers

Publication date

2020-07-14T07:18:54Z


2020-05-21


Abstract

Machine Learning as a Service platforms are a sensible choice for practitioners who want to incorporate machine learning into their products while reducing time and cost. However, to benefit from their advantages, a method for assessing their performance when applied to a target application is needed. In this work, we present a robust uncertainty-based method for evaluating the performance of both probabilistic and categorical classification black-box models, in particular APIs, that enriches the predictions obtained with an uncertainty score. This uncertainty score enables the detection of inputs with very confident but erroneous predictions, while protecting against out-of-distribution data points when deploying the model in a production setting. We validate the proposal in different natural language processing and computer vision scenarios. Moreover, taking advantage of the computed uncertainty score, we show that one can significantly increase the robustness and performance of the resulting classification system by rejecting uncertain predictions.
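The rejection idea described in the abstract can be sketched as a thin wrapper around any black-box classifier that returns class probabilities: compute an uncertainty score for each prediction and abstain when it exceeds a threshold. The sketch below is illustrative only; it uses predictive entropy as a stand-in uncertainty score, whereas the article derives its own score, and the `black_box_predict` callable and `threshold` value are assumptions, not part of the paper's API.

```python
import math

def predictive_entropy(probs):
    """Entropy of a categorical distribution; a simple stand-in uncertainty score."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def rejection_wrapper(black_box_predict, threshold):
    """Wrap a black-box classifier (e.g. a prediction API) that returns
    class probabilities for an input.

    The wrapped predictor returns (label, uncertainty), or (None, uncertainty)
    when the prediction is rejected as too uncertain.
    """
    def predict(x):
        probs = black_box_predict(x)
        u = predictive_entropy(probs)
        if u > threshold:
            return None, u  # abstain: defer to a human or a fallback system
        label = max(range(len(probs)), key=lambda i: probs[i])
        return label, u
    return predict

# Stub "APIs" returning fixed probabilities, for illustration only
confident = rejection_wrapper(lambda x: [0.95, 0.03, 0.02], threshold=0.5)
uncertain = rejection_wrapper(lambda x: [0.40, 0.35, 0.25], threshold=0.5)
```

Raising the threshold trades coverage for accuracy: fewer inputs are rejected, but more confidently wrong predictions slip through, which is the trade-off the article evaluates.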

Document type

Article


Published version

Language

English

Published by

Institute of Electrical and Electronics Engineers (IEEE)

Related documents

Reproduction of the document published at: https://doi.org/10.1109/ACCESS.2020.2996495

IEEE Access, 2020, vol. 8, p. 101721-101746

https://doi.org/10.1109/ACCESS.2020.2996495

Recommended citation

This citation was generated automatically.

Rights

cc-by (c) Mena, José et al., 2020

http://creativecommons.org/licenses/by/3.0/es

This item appears in the following collection(s)