OpenAlex · Updated hourly · Last updated: 20 Mar 2026, 05:21

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

GPT-4V Cannot Generate Radiology Reports Yet

2025 · 3 citations · Open Access
Open full text at the publisher

Citations: 3

Authors: 5

Year: 2025

Abstract

GPT-4V's purported strong multimodal abilities raise interest in using it to automate radiology report writing, but thorough evaluations are lacking. In this work, we perform a systematic evaluation of GPT-4V in generating radiology reports across three chest X-ray report benchmarks: MIMIC-CXR, CheXpert Plus, and IU X-Ray. We attempt to directly generate reports with different prompting strategies and find that the model fails terribly on both lexical metrics and clinical efficacy metrics. To understand the low performance, we decompose the task into two steps: 1) the medical image reasoning step of predicting medical condition labels from images; and 2) the report synthesis step of generating reports from (groundtruth) conditions. We show that GPT-4V's performance in image reasoning is consistently low across different prompts. In fact, the distributions of model-predicted labels remain constant regardless of which groundtruth conditions are present in the image, suggesting that the model is not interpreting chest X-rays meaningfully. Even when given groundtruth conditions for report synthesis, its generated reports are less correct and less natural-sounding than those of a finetuned Llama. Altogether, our findings cast doubt on the viability of using GPT-4V in a radiology workflow.
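The "clinical efficacy" metrics mentioned in the abstract typically score generated reports by the condition labels they imply rather than by word overlap. The sketch below is an illustrative, simplified version of such a per-condition F1 computation; the label set and helper names are hypothetical (the paper's actual pipeline and label extractor are not specified on this page).

```python
# Hedged sketch: a simplified per-condition F1, in the spirit of the
# "clinical efficacy" metrics the abstract refers to. CONDITIONS and the
# toy data below are illustrative, not from the paper.
from typing import Dict, List

# Hypothetical subset of chest X-ray condition labels (CheXpert-style).
CONDITIONS = ["Atelectasis", "Cardiomegaly", "Edema", "Pleural Effusion"]

def per_label_f1(gold: List[Dict[str, int]],
                 pred: List[Dict[str, int]]) -> Dict[str, float]:
    """F1 per condition over a set of reports.

    gold/pred: one dict per report mapping condition name -> 0/1.
    """
    scores = {}
    for c in CONDITIONS:
        tp = sum(1 for g, p in zip(gold, pred) if g[c] == 1 and p[c] == 1)
        fp = sum(1 for g, p in zip(gold, pred) if g[c] == 0 and p[c] == 1)
        fn = sum(1 for g, p in zip(gold, pred) if g[c] == 1 and p[c] == 0)
        denom = 2 * tp + fp + fn
        scores[c] = (2 * tp / denom) if denom else 0.0
    return scores

# Toy example: labels extracted from two reference and two generated reports.
gold = [{"Atelectasis": 1, "Cardiomegaly": 0, "Edema": 1, "Pleural Effusion": 0},
        {"Atelectasis": 0, "Cardiomegaly": 1, "Edema": 0, "Pleural Effusion": 1}]
pred = [{"Atelectasis": 1, "Cardiomegaly": 0, "Edema": 0, "Pleural Effusion": 0},
        {"Atelectasis": 0, "Cardiomegaly": 1, "Edema": 0, "Pleural Effusion": 0}]

scores = per_label_f1(gold, pred)
macro_f1 = sum(scores.values()) / len(scores)  # average F1 across conditions
```

A model that always predicts the same label distribution, as the abstract describes, would score well only on conditions whose base rate happens to match, which is why label-level F1 is a sharper probe than lexical overlap.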

Related Works

Authors

Institutions

Topics

Radiomics and Machine Learning in Medical Imaging · Radiology Practices and Education · Artificial Intelligence in Healthcare and Education