This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Defining operational safety in clinical artificial intelligence systems
Citations: 0
Authors: 7
Year: 2026
Abstract
The clinical adoption of artificial intelligence (AI) has focused on enabling automation, but conventional accuracy metrics fail to answer a key question: when is it safe to trust an AI system? We introduce the Safety-Aware Receiver Operating Characteristic (SA-ROC) framework, which defines operational safety as the ability to meet pre-specified reliability levels. The SA-ROC curve delineates a Rule-in and a Rule-out Safe Zone, where autonomous action is permitted, and a Gray Zone, where human review is mandated. To quantify this non-automated workload, we introduce the Gray Zone Area (Γ_Area), a metric measuring the operational cost of indecision. Our framework reveals a key reversal: in a case study of two FDA-cleared algorithms for cancer screening, the model with a statistically superior AUC was found to be operationally less safe for high-confidence screening. SA-ROC enables active governance, translating clinical policy into optimized workflows that inform operational safety and complement regulatory safety evaluation.
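The abstract's core idea of partitioning model scores into safe zones and a deferral region can be sketched numerically. The sketch below is an illustration only: the threshold names (`tau_out`, `tau_in`) and the simple fraction-based measure are assumptions, not the paper's actual SA-ROC definitions, which are given in the full article.

```python
import numpy as np

def gray_zone_fraction(scores, tau_out, tau_in):
    """Fraction of cases whose score falls between an illustrative
    rule-out threshold (below which the model may autonomously rule out)
    and rule-in threshold (above which it may autonomously rule in).
    Cases in between are deferred to human review."""
    scores = np.asarray(scores, dtype=float)
    in_gray = (scores > tau_out) & (scores < tau_in)
    return in_gray.mean()

# Example: scores below 0.2 are ruled out, above 0.8 ruled in;
# the two mid-range cases (0.4 and 0.6) require human review.
scores = [0.05, 0.15, 0.4, 0.6, 0.85, 0.95]
print(gray_zone_fraction(scores, tau_out=0.2, tau_in=0.8))
```

Under this toy definition, a lower gray-zone fraction at fixed reliability targets would correspond to less non-automated workload, which is the trade-off the Γ_Area metric is described as quantifying.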
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations