This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Decoupling Visual Parsing and Diagnostic Reasoning for Vision–Language Models (GPT-4o and GPT-5): Analysis Using Thoracic Imaging Quiz Cases
Citations: 2
Authors: 5
Year: 2025
Abstract
<b>Background:</b> Vision-language models (VLMs) have the potential to identify findings on radiologic imaging (i.e., visual parsing) and to translate findings into diagnoses (i.e., diagnostic reasoning). Current VLMs have shown insufficient performance to support clinical integration. <b>Objective:</b> To evaluate the separate contributions of visual parsing and diagnostic reasoning to GPT-based VLMs' performance in generating correct diagnoses for thoracic imaging. <b>Methods:</b> This retrospective study included 128 publicly available thoracic imaging cases from the Korean Society of Thoracic Imaging quiz platform (accessed on June 15, 2025). Two VLMs (GPT-4o and GPT-5) processed the cases under two separate input conditions: patient metadata with images, and patient metadata with radiologist-generated image descriptions. The models provided five ranked differential diagnoses for each case; when given metadata and images, the models first provided a summary of imaging findings. The proportion of cases for which a model's five differential diagnoses included the correct diagnosis was determined (i.e., top-5 accuracy). The performance of quiz participants, who interpreted cases using metadata and images, was extracted from the platform. The quality of model-provided image summaries was scored on a 4-point scale (4 = best score). Logistic regression analyses assessed associations between model image summary scores and diagnostic performance. Diagnostic concordance was assessed between the models' top-ranked diagnoses and quiz participants' top-ten differential diagnoses. <b>Results:</b> Top-5 accuracy for GPT-4o and GPT-5 was 15.9% and 24.7%, respectively, when given metadata and images, and 40.1% and 59.1% when given metadata and descriptions; quiz participants' pooled top-5 accuracy was 45.8%. The median image summary score was 2 for both models; these scores showed significant independent associations with a top-5 match (GPT-4o, OR = 5.95; GPT-5, OR = 2.77; P < .001).
Concordance between the models' top-ranked diagnosis and quiz participants' differential lists was 31.6% (GPT-4o) and 39.3% (GPT-5) when given metadata and images, and 78.8% and 79.4%, respectively, when given metadata and descriptions. <b>Conclusions:</b> The two VLMs showed limited ability to visually identify thoracic imaging findings but performed more favorably in generating accurate diagnoses when provided with radiologist-generated descriptions. <b>Clinical Impact:</b> These results underscore the need for radiologist expertise in thoracic imaging interpretation and identify visual image parsing, rather than diagnostic reasoning, as the principal limitation constraining VLM performance.
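The top-5 accuracy metric described in the abstract (the proportion of cases whose correct diagnosis appears among a model's five ranked differentials) can be sketched as follows. This is a minimal illustration only; the diagnoses and case data below are hypothetical and are not drawn from the study.

```python
def top5_accuracy(ranked_differentials, correct_diagnoses):
    """Fraction of cases whose correct diagnosis appears in the
    model's top-5 ranked differential list.

    ranked_differentials: list of per-case ranked diagnosis lists.
    correct_diagnoses: list of the corresponding correct diagnoses.
    """
    hits = sum(
        truth in diffs[:5]
        for diffs, truth in zip(ranked_differentials, correct_diagnoses)
    )
    return hits / len(correct_diagnoses)

# Hypothetical example with two cases (illustrative values only):
preds = [
    ["pneumonia", "tuberculosis", "sarcoidosis", "lymphoma", "pulmonary edema"],
    ["pulmonary fibrosis", "emphysema", "pneumonia", "lung cancer", "abscess"],
]
truths = ["tuberculosis", "lung cancer"]
print(top5_accuracy(preds, truths))  # 1.0 (both correct diagnoses are in the top 5)
```

Only set membership within the top five ranks matters for this metric; the rank position itself is not weighted.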
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,102 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,468 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations