This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Layer by Layer: Assessing AI Diagnostic Accuracy With Incremental Case Information in Neuroradiology
Citations: 0
Authors: 4
Year: 2025
Abstract
Aim: Artificial intelligence (AI) has shown tremendous potential for improving diagnostic accuracy and efficiency in radiology. This study assesses the diagnostic performance of Google Gemini (version 1.5 Flash; Google DeepMind, Mountain View, California, USA), a proprietary large language model, in interpreting challenging diagnostic cases from the <i>American Journal of Neuroradiology's (AJNR)</i> "Case of the Month" series.
Materials and methods: We analyzed 143 neuroradiology cases spanning the brain, head and neck, and spine. Each case unfolded over four weeks, starting with the clinical history and followed by incremental imaging findings. At each stage, Google Gemini was prompted with the question, "What is the diagnosis?" Its accuracy was assessed at each information level and across specialty categories. The data used were publicly available, and no ethical approval was necessary.
Results: Gemini's diagnostic accuracy improved as case data accumulated, from 3.5% with the clinical history alone to 45.7% once the complete imaging findings were supplied. Accuracy by category was highest for spine cases (51.9%), followed by head and neck (45.5%) and brain (44.0%). A chi-square test for trend confirmed that the performance increase over time was statistically significant (p < 0.0000000001).
Conclusion: Google Gemini displays moderate diagnostic accuracy that improves with accumulated information. While encouraging, its shortcomings underline the need for continual validation and transparency. This study demonstrates the growing relevance of AI in neuroradiology and the necessity of comprehensive evaluation before clinical integration.
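The abstract reports a chi-square test for trend across the four weekly information levels. As a minimal sketch, assuming the standard Cochran-Armitage trend test was used, it can be computed with the Python standard library alone. The weekly counts below are hypothetical: only the first-week (3.5%) and last-week (45.7%) accuracies out of 143 cases come from the abstract; the intermediate weeks are invented for illustration.

```python
from math import erfc, sqrt

def chi_square_trend(successes, totals, scores=None):
    """Cochran-Armitage chi-square test for trend in proportions.

    Returns (z, p_two_sided); z > 0 means the proportion rises with the score.
    """
    k = len(successes)
    scores = scores or list(range(1, k + 1))  # default scores: week numbers 1..k
    N = sum(totals)
    p_bar = sum(successes) / N  # pooled success proportion under the null
    # Deviation of the score-weighted successes from their null expectation
    t = sum(x * (r - n * p_bar) for x, r, n in zip(scores, successes, totals))
    # Null variance of the trend statistic
    var = p_bar * (1 - p_bar) * (
        sum(n * x * x for x, n in zip(scores, totals))
        - sum(n * x for x, n in zip(scores, totals)) ** 2 / N
    )
    z = t / sqrt(var)
    return z, erfc(abs(z) / sqrt(2))  # two-sided p-value from the normal tail

# Hypothetical correct-diagnosis counts per week out of 143 cases each
# (weeks 2 and 3 are illustrative placeholders, not reported values):
z, p = chi_square_trend([5, 25, 45, 65], [143, 143, 143, 143])
print(f"z = {z:.2f}, p = {p:.2e}")
```

With counts rising this steeply, the test rejects the null of no trend by a wide margin, consistent with the very small p-value the abstract reports.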
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,339 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,211 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,614 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,478 citations