This is an overview page with metadata for this scientific work. The full article is available from the publisher.
The Neurosurgical Uncertainty Index: Self-Doubting AI for rare or unexpected surgical complications
Citations: 0
Authors: 2
Year: 2025
Abstract
Rare or unexpected postoperative neurosurgical complications pose a challenge due to clinical variability and gaps in available data. We introduce the Neurosurgical Uncertainty Index (NUI), an uncertainty-aware AI framework that integrates bootstrap sampling for aleatoric uncertainty, isolation forest anomaly detection, and clinical calibration to predict and stratify risks for 13 complications. The NUI distinguishes between data-driven and model-driven uncertainty and highlights cases that conventional models often miss. In a cohort of 80 patients, the hybrid Rare Event Score (anomaly × uncertainty) achieved critical risk stratification with an AUROC of 0.92 (95% CI: 0.85–0.97) for complications requiring intervention, demonstrating 89% precision for critical cases (score ≥ 0.8). Entropy thresholds (> 1.5 nats) flagged 18% of predictions for review, preventing three overconfidence errors. Interpretable risk tiers are designed to integrate seamlessly with clinical workflows. By merging machine learning, neurosurgery, and epistemology, the NUI promotes AI that acknowledges its limitations, aiming for safer surgery.
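The abstract describes combining bootstrap-derived predictive uncertainty with isolation-forest anomaly scores into a hybrid Rare Event Score, and flagging high-entropy predictions for review. A minimal sketch of how such a composition could look is below; the data, model choice (random forest), normalisations, and the entropy threshold are illustrative assumptions, not the authors' implementation. Note that the paper's 1.5-nat threshold presumes a multi-class predictive distribution over 13 complications, whereas this binary toy example is bounded by ln 2 ≈ 0.693 nats, so a lower threshold is used.

```python
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier
from sklearn.utils import resample

# Synthetic stand-in cohort (hypothetical; the real NUI uses clinical features)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

# Bootstrap ensemble: refit on resampled cohorts and read the spread of
# predicted probabilities as per-patient uncertainty.
probs = []
for seed in range(20):
    Xb, yb = resample(X, y, random_state=seed)
    clf = RandomForestClassifier(n_estimators=50, random_state=seed).fit(Xb, yb)
    probs.append(clf.predict_proba(X)[:, 1])
probs = np.array(probs)              # shape (n_bootstrap, n_patients)
mean_p = probs.mean(axis=0)
uncertainty = probs.std(axis=0)      # spread across bootstrap fits

# Isolation-forest anomaly score, rescaled to [0, 1] (higher = more anomalous)
iso = IsolationForest(random_state=0).fit(X)
raw = -iso.score_samples(X)
anomaly = (raw - raw.min()) / (raw.max() - raw.min() + 1e-12)

# Hybrid Rare Event Score: anomaly x (normalised) uncertainty
u = (uncertainty - uncertainty.min()) / (uncertainty.max() - uncertainty.min() + 1e-12)
rare_event_score = anomaly * u

# Binary predictive entropy in nats; flag high-entropy cases for human review.
eps = 1e-12
entropy = -(mean_p * np.log(mean_p + eps) + (1 - mean_p) * np.log(1 - mean_p + eps))
flagged = entropy > 0.6              # illustrative; binary entropy tops out at ln 2
```

The multiplicative combination means a case must be both atypical in feature space and uncertain under resampling to score highly, which matches the abstract's aim of surfacing rare events that a single well-calibrated classifier would miss.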
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,493 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,377 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,835 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,555 citations