Abstract:
|
In an unavoidable harmful situation, autonomous vehicle systems are expected to
choose the course of action that causes the least damage to everybody. However, this
behavioral protocol implies some predictability. In this context, we show that if the
autonomous vehicle's decision process is perfectly known, then malicious, opportunistic,
terrorist, criminal, and non-civic individuals may have incentives to manipulate it.
Consequently, some level of uncertainty is necessary for the system to be manipulation-proof.
Uncertainty removes the incentive to misbehave because it increases
the risk and likelihood of an unsuccessful manipulation. However, uncertainty may also
decrease the quality of the decision process, with negative impacts on efficiency and social welfare. We also discuss other possible solutions to this problem.
Keywords: Artificial intelligence; Autonomous vehicles; Manipulation; Malicious
Behavior; Uncertainty.
JEL classification: D81, L62, O32.