Multi-Objective Reinforcement Learning for Designing Ethical Multi-Agent Environments

Publication date

2023-08-23

Abstract

This paper tackles the open problem of value alignment in multi-agent systems. In particular, we propose an approach to build an ethical environment that guarantees that agents in the system learn a joint ethically-aligned behaviour while pursuing their respective individual objectives. Our contributions are grounded in the framework of Multi-Objective Multi-Agent Reinforcement Learning. First, we characterise a family of Multi-Objective Markov Games (MOMGs), the so-called ethical MOMGs, for which we can formally guarantee the learning of ethical behaviours. Second, based on our characterisation, we specify the process for building single-objective ethical environments that simplify learning in the multi-agent system. We illustrate our process with an ethical variation of the Gathering Game, where agents compensate for social inequalities by learning to behave in alignment with the moral value of beneficence.
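The core idea of collapsing a multi-objective game into a single-objective ethical environment can be illustrated with a linear scalarisation of a two-component reward. This is only a minimal sketch: the function `scalarise`, the reward values, and the ethical weight `w_e` are illustrative assumptions, not the paper's actual construction, which derives guarantees for a whole family of ethical MOMGs.

```python
# Hypothetical sketch: linear scalarisation of a two-objective reward
# (individual objective + ethical objective), the kind of collapse used
# when turning a Multi-Objective Markov Game into a single-objective
# ethical environment. The weight `w_e` is an assumed parameter chosen
# large enough that ethical returns dominate individual gains.

def scalarise(r_individual: float, r_ethical: float, w_e: float) -> float:
    """Combine the individual and ethical reward components into a scalar."""
    return r_individual + w_e * r_ethical

# Two candidate actions: a greedy one with high individual reward but an
# ethical penalty, and a beneficent one that forgoes some individual reward.
greedy = scalarise(r_individual=1.0, r_ethical=-1.0, w_e=2.0)
beneficent = scalarise(r_individual=0.5, r_ethical=0.0, w_e=2.0)

# With a sufficiently large ethical weight, the beneficent action is
# preferred by any agent maximising the scalarised reward.
assert beneficent > greedy
```

Under this toy scalarisation, a single-objective learner that maximises the combined reward implicitly learns the ethically-aligned behaviour, which is the simplification the paper's construction aims for.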

Document type

Article


Published version

Language

English

Published by

Springer Verlag

Related documents

Reproduction of the document published at: https://doi.org/10.1007/s00521-023-08898-y

Neural Computing & Applications, 2023, vol. 37, p. 25619-25644


Recommended citation

This citation was generated automatically.

Rights

cc by (c) Manel Rodríguez Soto, 2023

http://creativecommons.org/licenses/by/3.0/es/
