Risk mitigation in algorithmic accountability: The role of machine learning copies

Other authors

Universitat Ramon Llull. Esade

Publication date

2020

Abstract

Machine learning plays an increasingly important role in our society and economy and is already having an impact on our daily lives in many different ways. From several perspectives, machine learning is seen as the new engine of productivity and economic growth. It can increase business efficiency, improve decision-making processes, and spawn new products and services built on complex machine learning algorithms. In this scenario, the lack of actionable accountability-related guidance is potentially the single most important challenge facing the machine learning community. Machine learning systems are often composed of many parts, combining third-party components, software-as-a-service APIs, and other elements. In this paper we study the role of copies for risk mitigation in such machine learning systems. Formally, a copy can be regarded as an approximate projection operator of a model onto a target model hypothesis set. Under the conceptual framework of actionable accountability, we explore the use of copies as a viable alternative in circumstances where models can neither be re-trained nor enhanced by means of a wrapper. We use a real residential mortgage default dataset as a use case to illustrate the feasibility of this approach.
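The copying idea described in the abstract can be illustrated with a minimal sketch (this is not the paper's code; the model, sampling region, and hypothesis set below are hypothetical choices for illustration). An inaccessible "original" model is treated as a black box that can only be queried; synthetic points are sampled, labeled by the original, and a copy from a chosen target hypothesis set is fit to those labels. Agreement between copy and original on fresh samples (fidelity) measures how well the copy projects the original into the new hypothesis set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box original: we can only observe its hard predictions.
def original_model(X):
    # Stand-in for a trained classifier with a nonlinear decision rule.
    return (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)

# Step 1: sample synthetic points covering the model's operating region.
X_synth = rng.uniform(-2.0, 2.0, size=(5000, 2))

# Step 2: label them by querying the original model.
y_synth = original_model(X_synth)

# Step 3: fit a copy from a target hypothesis set -- here, logistic
# regression on quadratic features, trained by plain gradient descent.
def features(X):
    return np.column_stack([np.ones(len(X)), X, X ** 2])

Phi = features(X_synth)
w = np.zeros(Phi.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-np.clip(Phi @ w, -30, 30)))
    w -= 0.3 * Phi.T @ (p - y_synth) / len(y_synth)

def copy_model(X):
    return (features(X) @ w > 0).astype(int)

# Step 4: fidelity = agreement with the original on fresh samples.
X_test = rng.uniform(-2.0, 2.0, size=(2000, 2))
fidelity = np.mean(copy_model(X_test) == original_model(X_test))
```

The copy never sees the original's parameters or training data, only its input-output behavior, which is what makes the approach viable when a deployed model cannot be re-trained or wrapped.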

Document Type

Article

Document version

Published version

Language

English

Subjects and keywords

Machine learning systems

Pages

26 p.

Publisher

Public Library of Science

Published in

PLOS One

Rights

© The author(s)

Attribution 4.0 International

This item appears in the following Collection(s)

Esade