This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Development and retrospective validation of SCOUT: scalable clinical oversight of large language models via uncertainty triangulation
Citations: 0
Authors: 26
Year: 2026
Abstract
Large language models (LLMs) are increasingly used in clinical workflows, yet requiring clinician review of every AI output negates the efficiency gains that motivate their adoption. We present SCOUT (Scalable Clinical Oversight via Uncertainty Triangulation), a model-agnostic meta-verification framework that selectively defers unreliable LLM predictions to clinicians by triangulating three orthogonal signals: model heterogeneity, stochastic inconsistency, and reasoning critique. In this retrospective development and validation study, we derived the framework on a discovery cohort (n = 405) and validated it across three clinically distinct tasks using 4 independent retrospective cohorts: coronary heart disease subtyping (n = 2,271), liver cancer screening from radiology reports (n = 3,373), and diseased coronary vessel counting (n = 286). SCOUT reduced the volume of cases requiring human review by 45% to 83%, with projected final accuracy of 99.1% to 100.0% assuming expert correction of all flagged cases. SCOUT provides a scalable, retrospectively validated approach for deploying generative AI in clinical medicine without compromising patient safety. Prospective randomized validation is underway to confirm real-world clinical utility.
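The deferral logic described in the abstract can be illustrated with a minimal sketch. This is not the paper's published code: the function name, the unanimity thresholds, and the combination rule are assumptions for illustration. It accepts an LLM prediction only when all three signals agree — answers from heterogeneous models match, repeated stochastic samples from one model match, and a reasoning-critique step finds no flaw — and otherwise flags the case for clinician review.

```python
from collections import Counter

def should_defer(model_answers, sampled_answers, critique_ok,
                 agreement_threshold=1.0):
    """Hypothetical triangulated-deferral sketch (not SCOUT's actual code).

    model_answers:   answers from several distinct LLMs (model heterogeneity)
    sampled_answers: repeated samples from one LLM (stochastic inconsistency)
    critique_ok:     whether a reasoning-critique step found no flaw
    Returns True when the case should be deferred to a clinician.
    """
    def agreement(answers):
        # Fraction of answers matching the most common answer.
        top_count = Counter(answers).most_common(1)[0][1]
        return top_count / len(answers)

    models_agree = agreement(model_answers) >= agreement_threshold
    samples_agree = agreement(sampled_answers) >= agreement_threshold
    # Accept only if all three orthogonal signals agree; otherwise defer.
    return not (models_agree and samples_agree and critique_ok)

# Confident case: all models and samples agree, critique passes → accept.
print(should_defer(["NSTEMI"] * 3, ["NSTEMI"] * 5, True))
# Uncertain case: models disagree → defer to clinician review.
print(should_defer(["NSTEMI", "STEMI", "NSTEMI"], ["NSTEMI"] * 5, True))
```

With a unanimity threshold, a single dissenting model or sample routes the case to human review, which matches the abstract's safety-first framing: efficiency gains come only from cases where every signal concurs.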
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations
Authors
Institutions
- Chinese Academy of Medical Sciences & Peking Union Medical College (CN)
- Fu Wai Hospital (CN)
- Peking University (CN)
- Peking University First Hospital (CN)
- University of Oxford (GB)
- Peking Union Medical College Hospital (CN)
- University of Science and Technology of China (CN)
- Wuxi Fourth People's Hospital (CN)
- Wuxi People's Hospital (CN)
- Hebei Medical University (CN)
- First Affiliated Hospital of Hebei Medical University (CN)
- Northern Jiangsu People's Hospital (CN)
- Jiangsu Province Hospital (CN)
- Nanjing Medical University (CN)
- First Affiliated Hospital of Xi'an Jiaotong University (CN)