Abstract
Explainability, the capability of an artificial intelligence system (AIS) to explain its outcomes in a manner comprehensible to human beings at an acceptable level, has been deemed essential for critical sectors such as healthcare. But is this really the case? In this perspective, we consider two extreme cases, “Oracle” (without explainability) versus “AI Colleague” (with explainability), for a thorough analysis. We discuss how the level of automation and explainability of an AIS can affect the determination of liability among the medical practitioner/facility and the manufacturer of the AIS. We argue that, from a legal standpoint, explainability plays a crucial role in establishing a responsibility framework in healthcare, one that shapes the behavior of all involved parties and mitigates the risk of defensive medicine practices.