This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
The strange case of Dr Watson: liability implications of AI evidence-based decision support systems in health care
Citations: 4
Authors: 2
Year: 2020
Abstract
This paper investigates the legal issues emerging from the adoption of clinical decision support systems (CDSS) based on artificial intelligence (AI). We explore a set of questions whose answers may affect the allocation of liability in misdiagnosis and/or improper treatment scenarios. The characteristic features of new-generation AI-based CDSS raise new challenges. In particular, we argue that a new shared decision-making authority model should be adopted, in line with the analysis of task–responsibility allocation. We also suggest that the level of automation should be taken into account when classifying these systems under the European regulations on medical device software. This classification may affect not only the certification procedures but also the allocation of liability. To this end, we finally design scenarios providing variations on the possible causes of failure in the decision-making process and the consequent liability assessment.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,402 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,270 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,702 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,507 citations