A Conversational Agent that Learns to be Aligned with the Moral Value of Respect

Publication date

2025-02-03T08:53:55Z

2025-01-31

Abstract

Videogame developers typically conduct user experience surveys to gather feedback from users after they have played. However, since users may not recall every detail once a session is over, we propose an ethical conversational agent that respectfully conducts the survey during gameplay. To achieve this without hindering the user's engagement, we resort to reinforcement learning and an ethical embedding algorithm. Specifically, we transform the learning environment so that the agent is guaranteed to learn to be respectful (i.e., aligned with the moral value of respect) while pursuing its individual objective of eliciting as much feedback as possible. When applying this approach to a simple videogame, comparative tests between the two agents (ethical and unethical) empirically demonstrate that endowing a survey-oriented conversational agent with the moral value of respect avoids disturbing the user's engagement while still pursuing its individual objective of gathering as much information as possible.
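The general idea described above — embedding an ethical objective into the agent's learning environment so that respectful behavior dominates the individual information-gathering objective — can be illustrated with a minimal sketch. This is not the paper's actual algorithm or environment; the toy states, actions, rewards, and the `ETHICAL_WEIGHT` constant are all hypothetical, chosen only to show one common form of ethical embedding: folding an ethical penalty into the scalar reward with a weight large enough that the learned policy never violates the value.

```python
import random

# Hypothetical toy setting (not the paper's environment):
# states: 0 = user is busy playing, 1 = user is idle
# actions: 0 = wait, 1 = ask a survey question
ETHICAL_WEIGHT = 10.0  # assumed weight, large enough that respect dominates

def reward(state, action):
    individual = 1.0 if action == 1 else 0.0                  # information gained by asking
    ethical = -1.0 if (action == 1 and state == 0) else 0.0   # asking while busy is disrespectful
    return individual + ETHICAL_WEIGHT * ethical              # ethically embedded scalar reward

def train(episodes=2000, alpha=0.1, epsilon=0.2, seed=0):
    """One-step (bandit-style) Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
    for _ in range(episodes):
        s = rng.randint(0, 1)  # user state drawn at random each episode
        if rng.random() < epsilon:
            a = rng.randint(0, 1)
        else:
            a = max((0, 1), key=lambda x: q[(s, x)])
        q[(s, a)] += alpha * (reward(s, a) - q[(s, a)])
    return q

q = train()
policy = {s: max((0, 1), key=lambda a: q[(s, a)]) for s in (0, 1)}
print(policy)  # the learned agent asks only when the user is idle
```

With the large ethical weight, the penalty for asking while the user is busy outweighs the information reward, so the greedy policy waits in state 0 and asks in state 1; with a small or zero weight, the agent would ask in both states, maximizing information at the cost of respect.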

Document Type

Article


Published version

Language

English

Publisher

IOS Press

Related items

Reproduction of the document published at: https://doi.org/10.1177/30504554241311168

AI Communications, 2025

https://doi.org/10.1177/30504554241311168


Rights

CC BY-NC © Eric Roselló Marín et al., 2025

http://creativecommons.org/licenses/by-nc/3.0/es/
