OpenAlex · Updated hourly · Last updated: 08 Apr 2026, 21:42

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Visual recognition limitations in multimodal large language models: A comparative analysis of histological image interpretation

2026 · 0 citations · PLOS Digital Health · Open Access
Open full text at publisher

0 citations · 5 authors · Year: 2026

Abstract

Multimodal large language models (LLMs) with image recognition capabilities have emerged as potential tools for medical image analysis, yet their performance in specialized domains like histology remains largely unexplored. The objective of this study was to systematically evaluate the performance of leading multimodal LLMs in histological image interpretation and assess their visual recognition capabilities. Four multimodal LLMs (GPT-4o, Claude Sonnet 4, Gemini 2.5 Flash, and Copilot) were evaluated using 144 histological images representing four tissue types (epithelial, connective, muscle, and nervous) at three magnification levels. Each image was assessed using three standardized questions: tissue identification, morphological features, and functional analysis. Three expert faculty members independently graded responses using a 4-point scale (1 = Poor to 4 = Excellent). Friedman tests, ICC, and post-hoc power analyses were performed with statistical significance set at p < .05. A clear performance hierarchy emerged, with Gemini demonstrating superior performance (mean score: 3.35/4.00) and significantly outperforming all other models. Copilot and GPT-4o tied for second place (both 2.76/4.00), while Claude showed the lowest performance (2.55/4.00). Performance varied across tissue types, with epithelial tissue showing the greatest inter-model variation. Inter-rater reliability was good across all models (ICC > 0.85), confirming assessment consistency. Post-hoc power analysis validated statistical significance for primary comparisons but indicated insufficient power to distinguish between the three lower-performing models. Current multimodal LLMs exhibit significant limitations in visual recognition relative to text processing performance. The substantial cross-modal performance gaps reveal constraints in visual processing architectures, though the underlying mechanisms require further investigation. These findings establish technical benchmarks for multimodal LLM development and highlight the need for specialized visual processing innovations in their imaging processes.
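The abstract's primary comparison — ranked scores from the same raters across four related models — is the classic setting for a Friedman test. A minimal sketch of that analysis in Python, using synthetic 4-point grades (not the study's actual data; the score distributions below are invented purely to illustrate the test's usage):

```python
# Sketch of the abstract's model-comparison design: a Friedman test on
# per-image 4-point grades for four models. All scores here are synthetic.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
n_images = 144  # number of histological images, as in the study

# Hypothetical grades (1-4) per image for each model; one model biased higher
# to mimic the reported performance hierarchy.
gemini  = rng.integers(3, 5, n_images)  # grades in {3, 4}
copilot = rng.integers(2, 4, n_images)  # grades in {2, 3}
gpt4o   = rng.integers(2, 4, n_images)
claude  = rng.integers(1, 4, n_images)  # grades in {1, 2, 3}

# Friedman test: non-parametric repeated-measures comparison across models,
# treating each image as a block.
stat, p = friedmanchisquare(gemini, copilot, gpt4o, claude)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4g}")
```

With a biased synthetic sample of this size the test easily detects the difference; on real data one would follow a significant result with pairwise post-hoc comparisons (e.g. Nemenyi or Wilcoxon with correction), matching the abstract's note that power sufficed only for the primary comparisons.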


Topics

AI in cancer detection · Artificial Intelligence in Healthcare and Education · Radiomics and Machine Learning in Medical Imaging