OpenAlex · Updated hourly · Last updated: 10.04.2026, 02:08

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

When Is It Safe to Introduce an AI System Into Healthcare? A Practical Decision Algorithm for the Ethical Implementation of Black‐Box AI in Medicine

2025 · 2 citations · Bioethics · Open Access

Citations: 2 · Authors: 3 · Year: 2025

Abstract

There is mounting global interest in the revolutionary potential of AI tools. However, their use in healthcare carries certain risks. Some argue that opaque ('black box') AI systems in particular undermine patients' informed consent. While interpretable models offer an alternative, this approach may be impossible with generative AI and large language models (LLMs). Thus, we propose that AI tools should be evaluated for clinical use based on their implementation risk, rather than their interpretability. We introduce a practical decision algorithm that assesses a black-box AI system's implementation risk before clinical deployment. Applied to the case of an LLM for surgical informed consent, the algorithm assesses a system's implementation risk by evaluating: (1) technical robustness, (2) implementation feasibility and (3) an analysis of harms and benefits. Accordingly, the system is categorised as minimal-risk (standard use), moderate-risk (innovative use) or high-risk (experimental use). Recommendations for implementation are proportional to risk, requiring more oversight for higher-risk categories. The algorithm also considers the system's cost-effectiveness and patients' informed consent.
