This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Artificial intelligence in radiology: Improving diagnostic accuracy in imaging techniques
Citations: 0
Authors: 2
Year: 2025
Abstract
The integration of artificial intelligence into radiological practice represents one of the most significant technological transformations in the history of medical imaging: machine learning and deep learning algorithms now demonstrate diagnostic performance approaching or exceeding that of human experts across multiple imaging modalities and clinical applications, with the potential to fundamentally reshape how imaging services are delivered in healthcare systems worldwide. This prospective observational study examined the impact of AI-assisted interpretation on diagnostic accuracy across multiple imaging modalities in Portuguese academic radiology departments. Performance metrics included sensitivity, specificity, and area under the receiver operating characteristic (ROC) curve, alongside effects on interpretation time and diagnostic error rates over a 21-month implementation period at two major academic medical centers. Participants were recruited from the radiology departments of the University of Lisbon and the University of Porto between September 2019 and June 2021. AI-assisted interpretation was evaluated across chest radiography, computed tomography (CT), magnetic resonance imaging (MRI), mammography, and ultrasound, encompassing over 12,400 imaging examinations interpreted with and without AI assistance by board-certified radiologists ranging in experience from residents to senior consultants. Primary outcomes were diagnostic accuracy, measured against reference standards (histopathology for biopsied lesions, clinical follow-up for conservatively managed findings, and expert consensus panel review); interpretation time, measured from initial image display to report completion; and diagnostic error rates, categorized by type and clinical significance.
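The sensitivity and specificity metrics named above are standard functions of confusion-matrix counts. The sketch below shows how they are computed; the counts are hypothetical illustrations, not data from the study.

```python
# Illustrative calculation of the diagnostic metrics used in the study.
# The confusion-matrix counts below are hypothetical, not study data.

def sensitivity(tp, fn):
    """True positive rate: share of actual positives correctly detected."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: share of actual negatives correctly cleared."""
    return tn / (tn + fp)

# Hypothetical counts for one modality: true/false positives and negatives
tp, fn, tn, fp = 473, 27, 880, 120

print(f"sensitivity = {sensitivity(tp, fn):.3f}")  # 0.946
print(f"specificity = {specificity(tn, fp):.3f}")  # 0.880
```

With these hypothetical counts, the sensitivity matches the 94.6% figure reported for AI-assisted CT interpretation, which illustrates how such percentages arise from raw detection counts.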
AI-assisted interpretation achieved significantly higher diagnostic accuracy across all modalities evaluated. CT imaging showed the greatest improvement, at 94.6% sensitivity versus 86.2% for radiologist-only interpretation (an 8.4 percentage-point gain), followed by MRI (92.8% versus 85.4%) and chest radiography (91.2% versus 84.8%). Temporal analysis revealed progressive improvement in AI-assisted performance over the implementation period, reflecting both algorithm refinement and radiologist learning: accuracy rose from 82.4% in the first quarter to 94.2% by the final assessment period as radiologists became proficient at integrating AI recommendations into established clinical workflows and decision-making processes. Mean interpretation time fell by 53%, from 12.4 to 5.8 minutes per examination, enabling substantially increased throughput without compromising quality, while diagnostic error rates declined from 8.4% to 3.2%, a 62% reduction in clinically significant misinterpretations that could adversely affect patient care. AI performance varied by pathology type: detection rates were highest for lung nodules (94.8%) and brain lesions (93.4%), while characterization accuracy was highest for fracture classification (94.2%), reflecting established algorithm strengths. The area under the ROC curve for AI-assisted interpretation exceeded 0.90 for every modality evaluated, indicating excellent discriminative performance across the full spectrum of radiological applications assessed.
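The headline figures above combine absolute (percentage-point) and relative (percent) changes, which are easy to conflate. This short sketch reproduces the arithmetic from the numbers reported in the abstract:

```python
# Reproducing the arithmetic behind the reported improvements.

# CT sensitivity: absolute gain is measured in percentage points (pp).
ai_sens, human_sens = 94.6, 86.2              # percent
pp_gain = ai_sens - human_sens
print(f"sensitivity gain: {pp_gain:.1f} pp")  # 8.4 pp

# Interpretation time: relative reduction against the baseline.
t_before, t_after = 12.4, 5.8                 # minutes per examination
time_cut = (t_before - t_after) / t_before * 100
print(f"time reduction: {time_cut:.0f}%")     # 53%

# Diagnostic error rate: also a relative reduction against baseline.
err_before, err_after = 8.4, 3.2              # percent of examinations
err_cut = (err_before - err_after) / err_before * 100
print(f"error reduction: {err_cut:.0f}%")     # 62%
```

Note the distinction: the sensitivity improvement is an absolute difference of 8.4 percentage points, whereas the 53% and 62% figures are reductions relative to the baseline values.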
These findings support systematic implementation of AI-assisted interpretation in Portuguese radiology departments, with potential for substantial improvements in diagnostic accuracy, workflow efficiency, and patient outcomes when the technology is appropriately integrated into clinical practice under radiologist oversight, with radiologists retaining final interpretive responsibility for all examinations.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,402 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,270 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,702 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,507 citations