This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Enhancing Radiologist Efficiency with AI: A Multi-Reader Multi-Case Study on Aortic Dissection Detection and Prioritisation
Citations: 0
Authors: 11
Year: 2024
Abstract
Background and Objectives: Acute aortic dissection (AD) is a life-threatening condition, and early detection can significantly improve patient outcomes and survival. This study evaluates the clinical benefits of integrating a deep-learning (DL)-based application for automated detection and prioritisation of AD on chest CT angiographies (CTAs), focusing on the reduction of scan-to-assessment times (STAT) and interpretation times (IT). Materials and Methods: This retrospective Multi-Reader, Multi-Case (MRMC) study compared AD detection with and without artificial intelligence (AI) assistance. Ground truth was established by two U.S. board-certified radiologists, while three additional expert radiologists participated as readers. All participants assessed the same CTAs without AI assistance (pre-AI arm) and, after a 1-month washout period, with the help of the device outputs (post-AI arm). STAT and IT were compared between the two phases. Results: The study included 285 CTAs (95 per reader, per arm), with a mean patient age of 58.5 years ± 14.7 (SD), 52% men, and 37% AD prevalence. AI assistance significantly reduced STAT for detecting 33 true positive AD cases, from 15.84 minutes (95% CI: 13.37–18.31 min) without AI to 5.07 minutes (95% CI: 4.23–5.91 min) with AI, a 68% reduction (p < 0.01). IT also decreased significantly, from 21.22 seconds (95% CI: 19.87–22.58 s) without AI to 14.17 seconds (95% CI: 13.39–14.95 s) with AI (p < 0.05). Conclusions: Integrating a DL-based algorithm for AD detection on chest CTAs significantly reduces both STAT and IT. By prioritising the most urgent cases, AI enables faster diagnosis and improves workflow efficiency in clinical radiology practice compared to a standard First-In, First-Out (FIFO) workflow.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,324 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,189 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,588 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,470 citations