This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Artificial intelligence-assisted reader evaluation in acute CT head interpretation (AI-REACT): a multireader multicase study
Citations: 0
Authors: 19
Year: 2026
Abstract
Objective To assess whether an artificial intelligence (AI) tool improves the accuracy, speed and confidence of general radiologists, emergency clinicians and radiographers in detecting critical non-contrast CT head (NCCTH) abnormalities, and to evaluate its stand-alone performance and factors influencing diagnostic accuracy.

Methods and analysis A retrospective dataset of 150 NCCTH scans (52 normal and 98 with critical abnormalities) was reviewed by 30 readers (10 radiologists, 15 emergency clinicians and 5 radiographers) from four National Health Service trusts. Each reader interpreted the scans unaided and then with the qER EU 2.0 AI tool, with a 2-week washout period between reads. Ground truth was established by two neuroradiologists. We measured the AI's stand-alone performance and its effect on reader accuracy, confidence and speed.

Results The qER algorithm showed strong diagnostic performance (area under the receiver operating characteristic curve 0.821–0.976). With AI, pooled reader sensitivity for critical abnormalities increased from 82.8% to 89.7% (+6.9%, p<0.001) and for intracranial haemorrhage from 84.6% to 91.6% (+7.0%, p<0.001), while specificity decreased from 84.5% to 78.9% (–5.5%, p=0.046). Reader confidence did not change significantly. Emergency department (ED) clinicians with AI achieved sensitivity similar to unaided radiologists.

Conclusion AI assistance increased sensitivity for detecting critical abnormalities on NCCTH but reduced specificity. AI enabled ED clinicians to achieve diagnostic sensitivity comparable to radiologists, supporting its potential to enhance non-radiologist performance. Further studies are needed to confirm these findings in clinical practice.

Trial registration number: NCT06018545.
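The abstract reports reader performance as sensitivity and specificity over the 98 abnormal and 52 normal scans. As a minimal illustration of how these metrics are derived from a 2×2 confusion matrix (the counts below are hypothetical, chosen only to land near the reported pooled figures, and are not the study's raw data):

```python
# Hedged sketch, not from the paper: sensitivity and specificity
# computed from confusion-matrix counts.

def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate: fraction of abnormal scans correctly flagged."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True-negative rate: fraction of normal scans correctly cleared."""
    return tn / (tn + fp)

# Hypothetical counts over 98 abnormal and 52 normal scans, chosen only
# to approximate the reported pooled figures (82.8% -> 89.7% sensitivity):
print(f"unaided sensitivity:     {sensitivity(81, 17):.1%}")  # ~82.7%
print(f"AI-assisted sensitivity: {sensitivity(88, 10):.1%}")  # ~89.8%
print(f"unaided specificity:     {specificity(44, 8):.1%}")   # ~84.6%
```

The reported –5.5% specificity change follows the same pattern: more scans flagged with AI assistance means more true positives among abnormal cases, but also more false positives among normal ones.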
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,102 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,468 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations
Authors
Institutions
- Oxford Health NHS Foundation Trust (GB)
- Oxford University Hospitals NHS Trust (GB)
- Qarshi University (PK)
- CE Technologies (United Kingdom) (GB)
- University of Oxford (GB)
- Cambridge University Hospitals NHS Foundation Trust (GB)
- Guy's and St Thomas' NHS Foundation Trust (GB)
- Northumbria Healthcare NHS Foundation Trust (GB)
- Northumbria Specialist Emergency Care Hospital (GB)
- University of Derby (GB)
- Canterbury Christ Church University (GB)
- University College London Hospitals NHS Foundation Trust (GB)
- University College London (GB)
- NHS Greater Glasgow and Clyde (GB)
- University of Glasgow (GB)