A Conversational Agent that Learns to be Aligned with the Moral Value of Respect

Publication date

2025-02-03T08:53:55Z

2025-01-31

Abstract

Videogame developers typically conduct user experience surveys to gather feedback from users once they have played. Nevertheless, as users may not recall all the details once they have finished, we propose an ethical conversational agent that respectfully conducts the survey during gameplay. To achieve this without hindering the user's engagement, we resort to reinforcement learning and an ethical embedding algorithm. Specifically, we transform the learning environment so that it guarantees that the agent learns to be respectful (i.e. aligned with the moral value of respect) while pursuing its individual objective of eliciting as much feedback information as possible. When applying this approach to a simple videogame, our comparative tests between the two agents (ethical and unethical) empirically demonstrate that endowing a survey-oriented conversational agent with this moral value of respect avoids disturbing the user's engagement while still pursuing its individual objective of gathering as much information as possible.
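The reward-transformation idea behind such an ethical embedding can be illustrated with a toy sketch. This is an assumption-laden illustration, not the paper's actual algorithm: the states ("idle"/"busy"), the actions ("ask"/"wait"), the penalty value, and the one-step Q-learning setup are all invented for exposition. The point shown is that shaping the reward with a respect penalty changes which policy the agent converges to, without removing its incentive to gather information.

```python
import random

# Toy survey-agent setting: at each step the user is either "idle" or
# "busy" (a hypothetical stand-in for in-game engagement). Action
# "ask" elicits feedback (+1 information reward); "wait" yields 0.
STATES = ["idle", "busy"]
ACTIONS = ["ask", "wait"]

def raw_reward(state, action):
    # Individual objective only: information elicited by asking.
    return 1.0 if action == "ask" else 0.0

def ethical_reward(state, action, penalty=2.0):
    # Assumed form of the ethical shaping: penalise asking while the
    # user is busy, i.e. a disrespectful interruption.
    r = raw_reward(state, action)
    if state == "busy" and action == "ask":
        r -= penalty
    return r

def q_learning(reward_fn, episodes=2000, alpha=0.1, eps=0.2, seed=0):
    # Tabular one-step Q-learning with epsilon-greedy exploration.
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)
        if rng.random() < eps:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        # Single-step (bandit-style) update: no successor state here.
        q[(s, a)] += alpha * (reward_fn(s, a) - q[(s, a)])
    # Return the greedy policy learned for each state.
    return {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in STATES}

unethical = q_learning(raw_reward)     # asks regardless of engagement
ethical = q_learning(ethical_reward)   # waits while the user is busy
print(unethical)
print(ethical)
```

Under these invented rewards, the unethical agent learns to ask in every state, while the ethical agent still asks when the user is idle but learns to wait when the user is busy, mirroring the paper's qualitative claim that respect can be embedded without abandoning the information-gathering objective.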

Document type

Article


Published version

Language

English

Published by

IOS Press

Related documents

Reproduction of the document published at: https://doi.org/10.1177/30504554241311168

AI Communications, 2025

https://doi.org/10.1177/30504554241311168

Recommended citation

This citation has been generated automatically.

Rights

cc by-nc (c) Eric Roselló Marín et al., 2025

http://creativecommons.org/licenses/by-nc/3.0/es/

This item appears in the following collection(s)