OpenAlex · Updated hourly · Last updated: 15.03.2026, 04:18

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

VisualCheXbert

2021 · 23 citations · Open Access

23

Citations

10

Authors

2021

Year

Abstract

Automatic extraction of medical conditions from free-text radiology reports is critical for supervising computer vision models to interpret medical images. In this work, we show that radiologists labeling reports significantly disagree with radiologists labeling corresponding chest X-ray images, which reduces the quality of report labels as proxies for image labels. We develop and evaluate methods to produce labels from radiology reports that have better agreement with radiologists labeling images. Our best performing method, called VisualCheXbert, uses a biomedically-pretrained BERT model to directly map from a radiology report to the image labels, with a supervisory signal determined by a computer vision model trained to detect medical conditions from chest X-ray images. We find that VisualCheXbert outperforms an approach using an existing radiology report labeler by an average F1 score of 0.14 (95% CI 0.12, 0.17). We also find that VisualCheXbert better agrees with radiologists labeling chest X-ray images than do radiologists labeling the corresponding radiology reports by an average F1 score across several medical conditions of between 0.12 (95% CI 0.09, 0.15) and 0.21 (95% CI 0.18, 0.24).
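The abstract's comparisons are stated as average F1 scores between two sets of binary condition labels. A minimal sketch of that metric, assuming per-condition binary labels (the conditions and label values below are illustrative, not data from the paper):

```python
# Hedged sketch: average F1 agreement between two labelers' binary
# per-condition labels, the comparison metric quoted in the abstract.
# All label data here is made up for illustration.

def f1_score(y_true, y_pred):
    """F1 = 2 * precision * recall / (precision + recall) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical labels (1 = condition present) for four studies.
image_labels = {"Cardiomegaly": [1, 0, 1, 1], "Edema": [0, 1, 0, 0]}
report_labels = {"Cardiomegaly": [1, 0, 0, 1], "Edema": [0, 1, 1, 0]}

per_condition = {c: f1_score(image_labels[c], report_labels[c])
                 for c in image_labels}
avg_f1 = sum(per_condition.values()) / len(per_condition)
```

Averaging the per-condition F1 scores (rather than pooling all labels) weights each medical condition equally, matching how the abstract reports "an average F1 score across several medical conditions".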

Similar Works