AI and Increased Medical Liability?

Alessandro Tacconelli (ETH Zurich)
Alexander Stremitzer (ETH Zurich)
Jakob Merane (ETH Zurich)
Aileen Nielsen (ETH Zurich)
Karol Jan Rutkowski (ETH Zurich)


An increasing number of AI systems provide ‘personalised’ medical recommendations that may deviate from standard care. Scholars have investigated whether, in the US common-law system, tort-law rules could discourage the use of potentially beneficial AI tools in medicine. We build on those findings by replicating the results of Tobia et al. (2021) and then extend the research to ask whether the same results hold in a different legal setting, namely a civil-law jurisdiction. To do so, we conduct an online experimental vignette study with three samples: a nationally representative sample of US adults, a nationally representative sample of German adults, and a sample of German physicians. Within each sample, we assign participants to one of four scenarios derived from a 2×2 factorial design, varying the described AI recommendation (standard or nonstandard care) and the physician’s decision (to accept or reject that recommendation), holding all else constant. We then ask participants to assess the physician’s liability. Our results indicate that, in all three samples, physicians who receive AI advice to provide standard care can reduce their risk of liability by accepting, rather than rejecting, that advice. When an AI system recommends nonstandard care, however, rejecting that advice and providing standard care offers no comparable shielding effect. We derive a set of legally relevant implications for tort-law systems in both common- and civil-law countries.


©2023 Italian Society of Law and Economics. All rights reserved.