This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
A Reality Check of Vision-Language Pre-training in Radiology: Have We Progressed Using Text?
Citations: 0
Authors: 3
Year: 2025
Abstract
Vision-language pre-training has recently gained popularity, as it allows learning rich feature representations from large-scale data sources. This paradigm has quickly made its way into the medical image analysis community. In particular, there is an impressive amount of recent literature developing vision-language models for radiology. However, the available medical datasets with image-text supervision are scarce, and medical concepts are fine-grained, involving expert knowledge that existing vision-language models struggle to encode. In this paper, we propose to take a prudent step back from the literature and revisit supervised, unimodal pre-training, using fine-grained labels instead. We conduct an extensive comparison demonstrating that unimodal pre-training is highly competitive and better suited to integrating heterogeneous data sources. Our results also question the open-vocabulary generalization potential of recent vision-language models, which have been evaluated under optimistic experimental settings. Finally, we study novel alternatives to better integrate fine-grained labels and noisy text supervision.
Related Works
MizAR 60 for Mizar 50
2023 · 74,099 citations
ImageNet: A large-scale hierarchical image database
2009 · 60,446 citations
Microsoft COCO: Common Objects in Context
2014 · 41,095 citations
Fully convolutional networks for semantic segmentation
2015 · 36,279 citations
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,299 citations