The social phenomenon we broadly call 'disinformation' has many faces. The most common is fake news, a term as old as information itself: monarchies and religious authorities were already concerned about, and prosecuting, false news in the 16th century. Exposure to false, biased, or malicious content has increased exponentially with the globalization of news dissemination, driven first by the invention of the World Wide Web in the 1990s, then by the popularization of social media (today a primary source of information for citizens, even though networks and platforms are often reluctant to provide authorship or provenance data for what they publish), and finally by the widespread use of artificial intelligence since 2022 (as an article by Wachter, Mittelstadt, and Russell argues, it is doubtful that LLM systems have any obligation to tell the truth)1. The legal response to this phenomenon is neither simple nor internationally harmonized.
This work is a result of the ongoing research projects Newsnet: "Impact of artificial intelligence and algorithms on online media, journalists and audiences" (PID2022-138391OB-I00) and "Automated counter narratives against misinformation and hate speech for journalists and social media" (TED2021-130810B-C22), funded by the Ministry of Science, Innovation and Universities of Spain and by the European Commission through NextGeneration EU/PRTR.
Report
Published version
English
Universitat Pompeu Fabra
info:eu-repo/grantAgreement/ES/3PE/PID2022-138391OB-I00
© Javier Díaz-Noci. December 2025. All rights reserved by the author. This work is distributed under a Creative Commons CC BY-NC-ND 4.0 license
https://creativecommons.org/licenses/by-nc-nd/4.0/