This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Uncover This Tech Term: Large Vision-Language Models in Radiology
0
Citations
3
Authors
2026
Year
Abstract
WHAT ARE LVLMs? Large multimodal models are typically transformer-based foundational models that can process and generate multiple types of data (modalities), including text, images, audio, and video [1,2]. Large vision-language models (LVLMs) are a subset of large multimodal models that specifically focus on aligning and integrating visual and linguistic representations. Traditional artificial intelligence (AI) systems are trained to perform well-defined narrow tasks and have limited adaptability. By contrast, LVLMs generalize across diverse tasks and support flexible downstream applications without requiring task-specific retraining.
Similar Works
Refinement and reassessment of the SERVQUAL scale.
1991 · 3,967 citations
Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review
2005 · 3,782 citations
Radiobiology for the Radiologist.
1974 · 3,502 citations
International evidence-based recommendations for point-of-care lung ultrasound
2012 · 2,818 citations
Radiation Dose Associated With Common Computed Tomography Examinations and the Associated Lifetime Attributable Risk of Cancer
2009 · 2,431 citations