OpenAlex · Updated hourly · Last updated: 15.03.2026, 23:34

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Transfer Learning with Large Language Models for Medical Imaging with Limited Data: Performance Comparison of Fine-Tuning Techniques

2025 · 0 citations · 6 authors
Open full text at the publisher
Abstract

Much recent research in medical imaging suffers from a scarcity of labeled data, which makes training deep-learning models from scratch challenging. At the same time, recent developments in large language models (LLMs), including vision-language transformers such as CLIP and BioGPT variants, open extensive opportunities for transfer learning in this field. This study evaluates the feasibility of applying transfer learning from pre-trained LLMs to data-constrained medical imaging classification tasks. We compare several fine-tuning methodologies: full fine-tuning, feature extraction, prompt tuning, and parameter-efficient approaches such as LoRA (Low-Rank Adaptation) and adapters. Experiments were conducted on standard medical imaging benchmarks, including ChestX-ray14 and COVID-19 datasets, under low-resource settings. Accuracy, F1-score, and AUC-ROC serve as the evaluation metrics. The results show that parameter-efficient fine-tuning methods can yield competitive performance at much lower computational overhead, enabling a practical accuracy-efficiency trade-off. This work sheds light on how to optimize LLMs for medical imaging and points the way toward more adaptable models for data-poor healthcare settings.
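To illustrate the parameter-efficiency argument in the abstract, below is a minimal NumPy sketch of the LoRA idea: a frozen pretrained weight matrix W is augmented with a trainable low-rank update B @ A, so only r·(d_in + d_out) parameters are trained instead of d_in·d_out. The dimensions, rank, and scaling here are illustrative assumptions, not values from the paper.

```python
import numpy as np

d_in, d_out, r = 768, 768, 8          # illustrative hidden sizes and LoRA rank
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))        # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01     # trainable down-projection
B = np.zeros((d_out, r))                      # trainable up-projection (zero init,
                                              # so training starts at the pretrained model)
scaling = 1.0 / r

def lora_forward(x):
    # Effective weight is W + scaling * (B @ A), applied without
    # ever materializing the merged d_out x d_in matrix.
    return W @ x + scaling * (B @ (A @ x))

full_params = W.size                  # parameters updated by full fine-tuning
lora_params = A.size + B.size         # parameters updated by LoRA
print(f"full fine-tuning params: {full_params}")
print(f"LoRA trainable params:   {lora_params}")
print(f"reduction factor:        {full_params / lora_params:.1f}x")
```

With these toy dimensions LoRA trains roughly 2% of the weights of a single layer, which is the kind of accuracy-efficiency trade-off the study quantifies on real benchmarks.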

Related works