As social media and Artificial Intelligence (AI)-driven systems become more embedded in human interactions, misinformation and manipulation pose serious concerns. From fake news and online scams to erroneous AI-generated content, users are increasingly vulnerable to being misled, whether by people or by automated systems such as chatbots, underscoring the urgent need for methods to verify manipulation in human-agent interactions. In this talk, I will present our recent results on the formal verification of human-agent interactions. Our approach is based on a class of formal dialogues called goal-hiding information-seeking dialogues. On top of this class of dialogues, we have defined a logic to verify manipulation. Through examples, I will illustrate how our approach can detect manipulation.
Conference report
English
UPC subject areas::Computer science::Computer architecture; High performance computing
http://creativecommons.org/licenses/by-nc-nd/4.0/
Open Access
Conferences [11156]