dc.contributor.author
De Chiara, Alessandro
dc.contributor.author
Manna, Ester
dc.contributor.author
Singh, Shubhranshu
dc.date.accessioned
2025-12-09T23:49:52Z
dc.date.available
2025-12-09T23:49:52Z
dc.date.issued
2025-12-09T11:00:42Z
dc.identifier
https://hdl.handle.net/2445/224749
dc.identifier.uri
http://hdl.handle.net/2445/224749
dc.description.abstract
We theoretically investigate whether AI developers or AI operators should be liable for the harm that AI systems may cause when they hallucinate. We find that the optimal liability framework may vary over time as AI technology evolves, and that making AI operators liable can be desirable only if it induces monitoring of the AI systems. We also highlight non-trivial relationships between welfare and reputational concerns, human supervision ability, and the accuracy of the technology. Our results have implications for regulatory design and business strategies.
dc.format
application/pdf
dc.relation
UB Economics – Working Papers, 2025 E25/492
dc.relation
[WP E-Eco25/492]
dc.rights
cc-by-nc-nd, (c) De Chiara et al., 2025
dc.rights
http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.rights
info:eu-repo/semantics/openAccess
dc.subject
Intel·ligència artificial
dc.subject
Teoria d'operadors
dc.subject
Disseny de sistemes
dc.subject
Artificial intelligence
dc.subject
Operator theory
dc.title
Mitigating Generative AI Hallucinations
dc.type
info:eu-repo/semantics/workingPaper