A Conversational Agent that Learns to be Aligned with the Moral Value of Respect

Publication date

2025-02-03T08:53:55Z

2025-01-31

Abstract

Videogame developers typically conduct user experience surveys to gather feedback from users after they have played. However, since users may not recall all the details once they have finished, we propose an ethical conversational agent that respectfully conducts the survey during gameplay. To achieve this without hindering the user's engagement, we resort to reinforcement learning and an ethical embedding algorithm. Specifically, we transform the learning environment so that it guarantees that the agent learns to be respectful (i.e., aligned with the moral value of respect) while pursuing its individual objective of eliciting as much feedback information as possible. When applying this approach to a simple videogame, our comparative tests between the two agents (ethical and unethical) empirically demonstrate that endowing a survey-oriented conversational agent with the moral value of respect avoids disturbing the user's engagement while still gathering as much information as possible.

Document type

Article


Published version

Language

English

Published by

IOS Press

Related documents

Reproduction of the document published at: https://doi.org/10.1177/30504554241311168

AI Communications, 2025

Recommended citation

This citation was generated automatically.

Rights

cc by-nc (c) Eric Roselló Marín et al., 2025

http://creativecommons.org/licenses/by-nc/3.0/es/