On the relative value of weak information of supervision for learning generative models: An empirical study

Publication date

2022-09-12T09:39:21Z

2022-11

Abstract

Weakly supervised learning aims to learn predictive models from partially supervised data, an easy-to-collect alternative to costly standard full supervision. Over the last decade, the research community has striven to show that learning reliable models in specific weakly supervised problems is possible. We present an empirical study that analyzes the value of weak information of supervision throughout its entire spectrum, from none to full supervision. Its contribution is assessed under the realistic assumption that a small subset of fully supervised data is available. Focusing on the problem of learning with candidate sets, we adapt Cozman and Cohen's [1] key study to learning from weakly supervised data. Standard learning techniques are used to infer generative models from this type of supervision with both synthetic and real data. Empirical results suggest that weakly labeled data is helpful in realistic scenarios, where fully labeled data is scarce, and that its contribution is directly related to both the amount of information of supervision and how meaningful this information is.
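The setting the abstract describes, learning a generative model when each instance carries a candidate set of possible labels (with fully labeled instances as singleton sets), can be sketched with a standard EM procedure. The following is a minimal illustration, not the paper's actual method: it fits one diagonal Gaussian per class, restricting the E-step posterior to each instance's candidate set. All function and variable names here are hypothetical.

```python
import numpy as np

def fit_candidate_set_em(X, candidate_sets, n_classes, n_iter=100):
    """EM for a class-conditional Gaussian model (diagonal covariance)
    when each sample carries a candidate set of possible labels.
    Fully supervised samples are just singleton candidate sets."""
    n, d = X.shape
    # mask[i, c] = 1 if class c belongs to the candidate set of sample i
    mask = np.zeros((n, n_classes))
    for i, cs in enumerate(candidate_sets):
        mask[i, list(cs)] = 1.0
    # Initialize responsibilities uniformly over each candidate set
    resp = mask / mask.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # M-step: class priors, per-class means and diagonal variances
        Nk = resp.sum(axis=0)
        priors = Nk / n
        means = (resp.T @ X) / Nk[:, None]
        var = np.empty((n_classes, d))
        for c in range(n_classes):
            diff = X - means[c]
            var[c] = (resp[:, c:c + 1] * diff**2).sum(axis=0) / Nk[c] + 1e-6
        # E-step: class posterior, renormalized over the candidate set only
        log_like = np.empty((n, n_classes))
        for c in range(n_classes):
            log_like[:, c] = (np.log(priors[c])
                              - 0.5 * np.sum(np.log(2 * np.pi * var[c]))
                              - 0.5 * np.sum((X - means[c])**2 / var[c], axis=1))
        resp = mask * np.exp(log_like - log_like.max(axis=1, keepdims=True))
        resp /= resp.sum(axis=1, keepdims=True)
    return priors, means, var
```

In line with the study's setup, a small fully supervised subset breaks the symmetry between classes, after which the weakly labeled instances (non-singleton candidate sets) refine the estimated class-conditional densities.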

Document type

Published version


Article

Language

English

Published by

Elsevier B.V.

Related documents

Reproduction of the document published at: https://doi.org/10.1016/j.ijar.2022.08.012

International Journal of Approximate Reasoning, 2022, vol. 150, p. 258-272

https://doi.org/10.1016/j.ijar.2022.08.012

Recommended citation

This citation was generated automatically.

Rights

cc-by (c) Jerónimo Hernández-González et al., 2022

http://creativecommons.org/licenses/by/4.0/es/

This item appears in the following collection(s)