OpenAlex · Updated hourly · Last updated: 2026-04-08, 01:32

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Uncover This Tech Term: Large Vision-Language Models in Radiology

2026 · 0 citations · Korean Journal of Radiology · Open Access
Open full text at publisher

0

Citations

3

Authors

2026

Year

Abstract

WHAT ARE LVLMs?

Large multimodal models are typically transformer-based foundation models that can process and generate multiple types of data (modalities), including text, images, audio, and video [1,2]. Large vision-language models (LVLMs) are a subset of large multimodal models that specifically focus on aligning and integrating visual and linguistic representations. Traditional artificial intelligence (AI) systems are trained to perform well-defined, narrow tasks and have limited adaptability. By contrast, LVLMs generalize across diverse tasks and support flexible downstream applications without requiring task-specific retraining.
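The "aligning visual and linguistic representations" the abstract describes is commonly implemented CLIP-style: an image encoder and a text encoder map their inputs into a shared embedding space, and cosine similarity scores how well an image matches a piece of text. A minimal sketch of that retrieval step, using made-up NumPy vectors as stand-ins for real encoder outputs (the embeddings and captions below are hypothetical placeholders, not outputs of any actual model):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_caption(image_emb: np.ndarray, caption_embs: list, captions: list) -> str:
    """Return the caption whose embedding best aligns with the image embedding."""
    scores = [cosine_similarity(image_emb, c) for c in caption_embs]
    return captions[int(np.argmax(scores))]

# Toy example: a hypothetical chest-radiograph embedding and three caption embeddings.
image_emb = np.array([0.9, 0.1, 0.2])
captions = [
    "normal chest radiograph",
    "right lower lobe pneumonia",
    "cardiomegaly",
]
caption_embs = [
    np.array([0.88, 0.12, 0.18]),  # nearly parallel to the image embedding
    np.array([0.10, 0.90, 0.30]),
    np.array([0.20, 0.30, 0.90]),
]

print(best_caption(image_emb, caption_embs, captions))  # -> normal chest radiograph
```

In a real LVLM the 3-dimensional toy vectors would be high-dimensional encoder outputs, but the matching step is the same; this zero-shot scoring is one reason such models need no task-specific retraining for new downstream tasks.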


Topics

Radiology practices and education · Artificial Intelligence in Healthcare and Education · COVID-19 diagnosis using AI