OpenAlex · Updated hourly · Last updated: 12.03.2026, 14:52

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Responsibility Gaps and Black Box Healthcare AI: Shared Responsibilization as a Solution

2023 · 39 citations · Digital Society · Open Access

39 citations · 3 authors · Year: 2023

Abstract

As sophisticated artificial intelligence software becomes more ubiquitously and more intimately integrated into domains of traditionally human endeavor, many are raising questions about how responsibility (be it moral, legal, or causal) can be understood for an AI's actions or influence on an outcome. So-called "responsibility gaps" occur whenever there is an apparent chasm in the ordinary attribution of moral blame or responsibility when an AI automates physical or cognitive labor otherwise performed by human beings and commits an error. Healthcare administration is an industry ripe for responsibility gaps produced by these kinds of AI. The moral stakes of healthcare are often life and death, and the demand for reducing clinical uncertainty while standardizing care incentivizes the development and integration of AI diagnosticians and prognosticators. In this paper, we argue that (1) responsibility gaps are generated by "black box" healthcare AI, (2) the presence of responsibility gaps, if unaddressed, creates serious moral problems, (3) a suitable solution is for relevant stakeholders to voluntarily responsibilize the gaps, taking on some moral responsibility for things they are not, strictly speaking, blameworthy for, and (4) should this solution be adopted, black box healthcare AI will be permissible in the provision of healthcare.
