Fecha de publicación

2025-07-03



Resumen

As social media and Artificial Intelligence (AI)-driven systems become more embedded in human interactions, misinformation and manipulation pose serious concerns. From fake news and online scams to erroneous AI-generated content, users are increasingly vulnerable to being misled—whether by people or automated systems, such as chatbots —underscoring the urgent need for methods to verify manipulation in human-agent interactions. In this talk, I will present our recent results in the formal verification of Human-Agent Interactions. Our approach is based on a class of formal dialogues called goal-hiding information-seeking dialogues. On the top of this class of dialogues, we have defined a logic to verify manipulation. Through examples, I will illustrate how our approach can detect manipulation.

Tipo de documento

Conference report

Lengua

Inglés

Citación recomendada

Esta citación se ha generado automáticamente.

Derechos

http://creativecommons.org/licenses/by-nc-nd/4.0/

Open Access

Este ítem aparece en la(s) siguiente(s) colección(ones)

Congressos [11156]