OpenAlex · Updated hourly · Last updated: 21 Mar 2026, 20:47

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

Defining operational safety in clinical artificial intelligence systems

2026 · 0 citations · npj Digital Medicine · Open Access
Open full text at the publisher

Citations: 0
Authors: 7
Year: 2026

Abstract

The clinical adoption of artificial intelligence (AI) has focused on enabling automation, but conventional accuracy metrics fail to answer a key question: when is it safe to trust an AI system? We introduce the Safety-Aware Receiver Operating Characteristic (SA-ROC) framework, which defines operational safety as the ability to meet pre-specified reliability levels. The SA-ROC curve delineates a Rule-in and a Rule-out Safe Zone, where autonomous action is permitted, and a Gray Zone, where human review is mandated. To quantify this non-automated workload, we introduce the Gray Zone Area (Γ_Area), a metric measuring the operational cost of indecision. Our framework reveals a key reversal: in a case study of two FDA-cleared algorithms for cancer screening, the model with a statistically superior AUC was found to be operationally less safe for high-confidence screening. SA-ROC enables active governance, translating clinical policy into optimized workflows that inform operational safety and complement regulatory safety evaluation.
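The abstract's core idea, partitioning a score axis into Safe Zones that meet a reliability target and a Gray Zone routed to humans, can be sketched numerically. This is an illustrative reading, not the authors' implementation: it assumes the reliability targets are PPV/NPV levels measured on held-out scores, and the function name, thresholds, and targets below are hypothetical.

```python
import numpy as np

def sa_roc_zones(scores, labels, ppv_target=0.95, npv_target=0.95):
    """Illustrative sketch of the SA-ROC partition (not the paper's code).

    Finds the lowest threshold whose cases satisfy a rule-in target (PPV)
    and the highest threshold whose cases satisfy a rule-out target (NPV);
    cases strictly between the two form the Gray Zone sent to human review.
    Returns (t_out, t_in, gray_fraction).
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = np.argsort(scores)
    s, y = scores[order], labels[order]
    n = len(s)

    # Rule-in Safe Zone: smallest t with PPV of {score >= t} >= target.
    t_in = np.inf
    for i in range(n):
        if y[i:].mean() >= ppv_target:
            t_in = s[i]
            break

    # Rule-out Safe Zone: largest t with NPV of {score <= t} >= target.
    t_out = -np.inf
    for i in range(n - 1, -1, -1):
        if (1 - y[: i + 1]).mean() >= npv_target:
            t_out = s[i]
            break

    # Gray Zone fraction: share of cases that still need human review
    # (a simple stand-in for the paper's Gamma_Area workload metric).
    gray = float(np.mean((scores > t_out) & (scores < t_in)))
    return t_out, t_in, gray
```

On a toy cohort with ambiguous mid-range scores, the two thresholds bracket the mixed cases and the gray fraction reports the residual human workload; the reversal described in the abstract corresponds to a model whose AUC is higher but whose gray fraction at the same targets is larger.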

Topics

Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI) · Adversarial Robustness in Machine Learning