Publication date

2025-07-03



Abstract

As social media and Artificial Intelligence (AI)-driven systems become more embedded in human interactions, misinformation and manipulation pose serious concerns. From fake news and online scams to erroneous AI-generated content, users are increasingly vulnerable to being misled, whether by people or by automated systems such as chatbots, underscoring the urgent need for methods to detect manipulation in human-agent interactions. In this talk, I will present our recent results on the formal verification of human-agent interactions. Our approach is based on a class of formal dialogues called goal-hiding information-seeking dialogues. On top of this class of dialogues, we have defined a logic to verify manipulation. Through examples, I will illustrate how our approach can detect manipulation.
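To make the idea of a goal-hiding information-seeking dialogue concrete, the sketch below models a dialogue as a sequence of moves, each annotated with the goals it actually serves. The detection rule, the class names (`Move`, `Dialogue`, `manipulative_moves`), and the example goals are illustrative assumptions for this summary; they are not the formal logic presented in the talk.

```python
# Hypothetical sketch: a goal-hiding information-seeking dialogue where an
# agent declares one goal but pursues hidden ones. All names and the toy
# detection rule below are illustrative assumptions, not the talk's logic.

from dataclasses import dataclass, field

@dataclass
class Move:
    speaker: str
    utterance: str
    supports: set  # goals this move actually gathers information for

@dataclass
class Dialogue:
    declared_goal: str   # the goal the information seeker announces
    hidden_goals: set    # goals pursued without being declared
    moves: list = field(default_factory=list)

def manipulative_moves(d: Dialogue) -> list:
    """Flag a move as manipulative (under this toy rule) when it serves
    only hidden goals, i.e. none of the information it seeks is relevant
    to the declared goal."""
    return [m for m in d.moves
            if m.supports
            and d.declared_goal not in m.supports
            and m.supports <= d.hidden_goals]

d = Dialogue(
    declared_goal="recommend_product",
    hidden_goals={"harvest_contact_data"},
    moves=[
        Move("bot", "What features do you need?", {"recommend_product"}),
        Move("bot", "What is your home address?", {"harvest_contact_data"}),
    ],
)
flagged = manipulative_moves(d)  # only the address question is flagged
```

In this toy reading, asking for product features is legitimate because it serves the declared goal, while asking for a home address is flagged because it serves only the hidden data-harvesting goal.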

Document Type

Conference report

Language

English


Rights

http://creativecommons.org/licenses/by-nc-nd/4.0/

Open Access

This item appears in the following Collection(s)

Congressos [11156]