Risk mitigation in algorithmic accountability: The role of machine learning copies

Publication date

2020-11-03

Abstract

Machine learning plays an increasingly important role in our society and economy and is already having an impact on our daily life in many different ways. From several perspectives, machine learning is seen as the new engine of productivity and economic growth. It can increase business efficiency, improve decision-making processes, and spawn new products and services built on complex machine learning algorithms. In this scenario, the lack of actionable accountability-related guidance is potentially the single most important challenge facing the machine learning community. Machine learning systems are often composed of many parts and ingredients, mixing third-party components, software-as-a-service APIs, and other elements. In this paper we study the role of copies for risk mitigation in such machine learning systems. Formally, a copy can be regarded as an approximate projection operator of a model onto a target model hypothesis set. Under the conceptual framework of actionable accountability, we explore the use of copies as a viable alternative in circumstances where models cannot be re-trained, nor enhanced by means of a wrapper. We use a real residential mortgage default dataset as a use case to illustrate the feasibility of this approach.
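The copying idea described in the abstract can be illustrated with a minimal sketch: a copy learns to reproduce the decisions of an existing model from synthetic query points alone, without access to the original training data. The models, sampling distribution, and dataset below are illustrative assumptions, not the paper's exact experimental setup.

```python
# Minimal sketch of machine learning copying (illustrative models and data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Stand-in for the original (black-box) model, e.g. a third-party component.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
original = RandomForestClassifier(random_state=0).fit(X, y)

# Generate synthetic query points and label them with the original
# model's predictions; the original training data is never used.
X_synth = rng.normal(0.0, 2.0, size=(5000, 4))
y_synth = original.predict(X_synth)

# The copy projects the original's decision function onto a different
# hypothesis set (here, a single decision tree).
copy = DecisionTreeClassifier(random_state=0).fit(X_synth, y_synth)

# Fidelity: agreement between copy and original on held-out queries.
X_test = rng.normal(0.0, 2.0, size=(1000, 4))
fidelity = float(np.mean(copy.predict(X_test) == original.predict(X_test)))
print(f"copy-original agreement: {fidelity:.3f}")
```

A copy built this way can then be inspected, audited, or deployed in place of the original where the original cannot be re-trained or wrapped.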

Document type

Article


Published version

Language

English

Published by

Public Library of Science (PLoS)

Related documents

Reproduction of the document published at: https://doi.org/10.1371/journal.pone.0241286

PLoS ONE, 2020, art. e0241286

https://doi.org/10.1371/journal.pone.0241286

Recommended citation

This citation was generated automatically.

Rights

cc-by (c) Unceta, Irene et al., 2020

http://creativecommons.org/licenses/by/3.0/es

This item appears in the following collection(s)