The cost of AI-driven accidents. A law and economics analysis of liability rules in the age of self-driving cars

Antonella Zarra (Hamburg University)

Abstract

Despite the exponential growth in the development of automated driving systems over the past ten years, currently deployed automated vehicles (AVs) are far from fully autonomous. Human intervention is still necessary in most circumstances to take final decisions or to avoid system failures. International standardization bodies classify AVs according to their degree of autonomy, with Level 2 and 3 vehicles being semi-autonomous, that is, requiring a safety driver ready to take over control if needed. When an accident occurs, the degree of interaction between human beings and machines, and the control that the safety driver can exert over the AV, have significant consequences for the attribution of liability. While fully autonomous AVs (Level 5) will most likely be subject to a strict liability regime, it is not yet clear which liability rule is more suitable for semi-autonomous AVs (Level 2 and 3), which may induce over-reliance on the technology, resulting in an increased level of negligence by the operator. Evidence from other sectors (e.g. aviation) that have already witnessed a shift toward high levels of automation suggests that human operators might become the “moral crumple zone” (Elish, 2019) of accidents involving automated vehicles, being consistently blamed for negligence even in cases where their control over the machine is limited.
Against this backdrop, this paper evaluates whether the existing tort system is well equipped to tackle the new challenges brought about by semi-autonomous AVs, or whether novel rules should be designed to ensure adequate levels of safety without stifling innovation. The paper applies the law and economics analytical framework, and in particular the economic theory of torts, to semi-autonomous AVs. The contribution first surveys existing liability frameworks applicable to semi-autonomous AVs and then reflects on the hypothesis of attributing legal personality to AI systems. It argues that semi-autonomous AVs are likely to require greater compliance efforts from both the manufacturer and the end-user, that is to say, Level 2 and 3 AVs are compliance-using technologies. As a consequence, the role of the “human in the loop” should not be disregarded when analyzing the investment in precautions by potential tortfeasors and victims. Furthermore, the paper contends that the type of liability regime (from strict liability to negligence) is shaped by how lawmakers conceive of AVs in the first place. In this respect, it is debatable whether the legal personality hypothesis would be effective in a tort setting. While this legal fiction works for corporations, in the case of AVs it would not relieve the manufacturer or the owner of potential disbursements for damages and insurance, adding a layer of complexity without solving the liability dilemma. Regulators should instead envisage technology-specific mechanisms for AI-driven accidents where human negligence persists, which would incentivize the adoption of adequate levels of precaution without discouraging firms’ investments in innovation. An integrated approach combining ex ante regulation and ex post tort law could help achieve an efficient system capable of addressing the challenges brought by AVs.
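For readers less familiar with the economic theory of torts the abstract invokes, the canonical bilateral-care accident model (in the tradition of Shavell, 1987) offers a minimal sketch of what "investment in precautions by potential tortfeasors and victims" means formally; the notation below is standard in the law and economics literature and is not drawn from the paper itself.

% Bilateral-care model: x, y are precaution expenditures of the
% injurer (e.g. manufacturer/operator) and the victim; p(x,y) is the
% accident probability, decreasing in both arguments; L is the
% monetized harm if an accident occurs.
\[
  \min_{x,\,y} \; SC(x, y) \;=\; x + y + p(x, y)\,L
\]
% Efficient care levels (x*, y*) satisfy the first-order conditions
\[
  1 + p_x(x^*, y^*)\,L = 0, \qquad 1 + p_y(x^*, y^*)\,L = 0.
\]
% Under strict liability the injurer internalizes p(x,y)L and chooses
% x*, but a fully compensated victim has no incentive to choose y*;
% under a negligence rule with due-care standard set at x*, the injurer
% meets the standard, the victim bears the residual loss, and both
% parties' incentives for efficient precaution are restored.

In this framework, the abstract's concern with semi-autonomous AVs can be read as a question about where the due-care standard should sit when the "human in the loop" retains only partial control over p(x, y).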

