OpenAlex · Updated hourly · Last updated: 15.03.2026, 19:54

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Efficient Optimization of Multimodal Language Models for Processing Radiological Data using the ROCO Dataset

2025 · 0 citations · Open Access
Open full text at publisher

0 citations · 3 authors · Year: 2025

Abstract

This study investigates the computational efficiency and performance trade-offs of model optimization techniques such as QLoRA, Knowledge Distillation (KD), and Pruning when applied to large language models (LLMs) and small language models (SLMs) for radiology question-answering tasks. Using the ROCOv2 multimodal dataset, we systematically compare baseline models against their fine-tuned and compressed counterparts. The primary goal is to evaluate whether such methods can substantially reduce memory and computational demands while maintaining acceptable accuracy, enabling deployment on edge devices and in low-resource clinical environments. Experimental results show that SLMs enhanced with QLoRA retain competitive accuracy while reducing GPU usage by up to 80%, and that combining KD and Pruning further improves inference speed and hardware efficiency, making these models viable for real-world radiological decision support on edge computing devices.
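The QLoRA technique named in the abstract fine-tunes small low-rank adapter matrices on top of a frozen, quantized base model, which is what keeps GPU memory low. As a rough illustration only (not the paper's code; all matrix sizes and values below are made up), here is a minimal pure-Python sketch of the low-rank update W' = W + (α/r)·B·A:

```python
# Minimal sketch of the low-rank adaptation (LoRA) idea underlying QLoRA.
# The base weight W stays frozen; only the small adapters A and B are trained,
# so trainable parameters scale with the rank r, not with the model size.

def matmul(a, b):
    """Naive matrix multiply for lists of lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def lora_effective_weight(W, A, B, alpha, r):
    """Return W' = W + (alpha / r) * B @ A."""
    scale = alpha / r
    BA = matmul(B, A)
    return [[W[i][j] + scale * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Toy 4x4 frozen base weight; rank-1 adapters A (1x4) and B (4x1).
W = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
A = [[0.1, 0.2, 0.3, 0.4]]
B = [[0.0], [0.0], [0.0], [0.0]]  # B initialized to zero, so W' == W at start

W_eff = lora_effective_weight(W, A, B, alpha=16, r=1)
print(W_eff == W)  # True: before adapter training the model is unchanged
```

Initializing B to zero is the standard LoRA convention: the adapted model starts out exactly equal to the base model, and training only ever moves it through the low-rank subspace spanned by B·A.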

Topics

Artificial Intelligence in Healthcare and Education · Multimodal Machine Learning Applications · Topic Modeling