Instilling moral value alignment by means of multi-objective reinforcement learning

Publication date

2022-01-24

Abstract

AI research is being challenged with ensuring that autonomous agents learn to behave ethically, that is, in alignment with moral values. Here, we propose a novel way of tackling the value alignment problem as a two-step process. The first step consists in formalising moral values and value-aligned behaviour based on philosophical foundations. Our formalisation is compatible with the framework of (Multi-Objective) Reinforcement Learning, to ease the handling of an agent's individual and ethical objectives. The second step consists in designing an environment wherein an agent learns to behave ethically while pursuing its individual objective. We leverage our theoretical results to introduce an algorithm that automates our two-step approach. In the cases where value-aligned behaviour is possible, our algorithm produces a learning environment for the agent wherein it will learn value-aligned behaviour.
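To make the Multi-Objective Reinforcement Learning framing mentioned above concrete, the sketch below shows one common way of combining an individual and an ethical objective: a linear scalarisation of a two-component reward vector inside a tabular Q-learning loop. The toy environment, reward definitions, and the `ETHICAL_WEIGHT` value are hypothetical illustrations, not the formalisation or algorithm from the paper.

```python
# Minimal sketch (assumed example): multi-objective Q-learning where the reward
# is a vector (individual, ethical) combined via linear scalarisation.
import random

N_STATES, N_ACTIONS = 5, 2   # toy chain environment (hypothetical)
ETHICAL_WEIGHT = 2.0         # weight on the ethical objective (assumed value)

def step(state, action):
    """Return (next_state, individual_reward, ethical_reward)."""
    next_state = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    individual = 1.0 if next_state == N_STATES - 1 else 0.0   # reach the goal
    ethical = -1.0 if action == 1 and state == 2 else 0.0     # penalise a "harmful" shortcut
    return next_state, individual, ethical

q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.95, 0.1

for episode in range(2000):
    state = 0
    for _ in range(20):
        # Epsilon-greedy action selection.
        if random.random() < epsilon:
            action = random.randrange(N_ACTIONS)
        else:
            action = max(range(N_ACTIONS), key=lambda a: q[state][a])
        next_state, r_ind, r_eth = step(state, action)
        # Linear scalarisation of the two objectives into a single reward.
        reward = r_ind + ETHICAL_WEIGHT * r_eth
        best_next = max(q[next_state])
        q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
        state = next_state
        if state == N_STATES - 1:
            break

print("Greedy policy:", [max(range(N_ACTIONS), key=lambda a: q[s][a]) for s in range(N_STATES)])
```

Under this kind of scalarisation, a sufficiently large ethical weight makes the ethically acceptable policy also the individually optimal one, which is the intuition behind designing an environment in which value-aligned behaviour is learned while the agent pursues its individual objective.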

Document type

Article


Published version

Language

English

Published by

Springer

Related documents

Reproduction of the document published at: https://doi.org/10.1007/s10676-022-09635-0

Ethics and Information Technology, 2022, vol. 24

https://doi.org/10.1007/s10676-022-09635-0

Recommended citation

This citation was generated automatically.

Rights

CC BY (c) Manel Rodríguez Soto et al., 2022

http://creativecommons.org/licenses/by/3.0/es/

This item appears in the following collection(s)