This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Performance of multimodal large language models in the Japanese surgical specialist examination
Citations: 1
Authors: 5
Year: 2025
Abstract
BACKGROUND: Multimodal large language models (LLMs) have the capability to process and integrate both text and image data, offering promising applications in the medical field. This study aimed to evaluate the performance of representative multimodal LLMs in the 2023 Japanese Surgical Specialist Examination, with a focus on image-based questions across various surgical subspecialties.

METHODS: A total of 98 examination questions, including 43 image-based questions, from the 2023 Japanese Surgical Specialist Examination were administered to three multimodal LLMs: GPT-4 Omni, Claude 3.5 Sonnet, and Gemini Pro 1.5. Each model's performance was assessed under two conditions: with and without images. Statistical analysis was conducted using McNemar's test to evaluate the significance of accuracy differences between the two conditions.

RESULTS: Among the three LLMs, Claude 3.5 Sonnet achieved the highest overall accuracy at 84.69%, exceeding the passing threshold of 80%, which is consistent with the standard set by the Japan Surgical Society for board certification. GPT-4 Omni closely approached the threshold with an accuracy of 79.59%, while Gemini Pro 1.5 scored 61.22%. Claude 3.5 Sonnet demonstrated the highest accuracy in four of six subspecialties for image-based questions and was the only model to show a statistically significant improvement with image inclusion (76.74% with images vs. 62.79% without images, p = 0.041). By contrast, GPT-4 Omni and Gemini Pro 1.5 did not exhibit significant performance changes with image inclusion.

CONCLUSION: Claude 3.5 Sonnet outperformed the other models in most surgical subspecialties for image-based questions and was the only model to benefit significantly from image inclusion.
These findings suggest that multimodal LLMs, particularly Claude 3.5 Sonnet, hold promise as diagnostic and educational support tools in surgical domains, and that variation in visual reasoning capabilities may account for model-level differences in image-based performance.
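The paired comparison described in the methods (each question answered both with and without images) can be sketched with a continuity-corrected McNemar test. This is a minimal illustration, not the authors' code: the abstract reports only the marginal accuracies (27/43 correct without images vs. 33/43 with images for Claude 3.5 Sonnet), so the discordant-pair counts used below are illustrative assumptions consistent with those totals.

```python
import math

def mcnemar_p(b: int, c: int) -> float:
    """Two-sided p-value for McNemar's test with continuity correction.

    b: questions correct without images but wrong with images
    c: questions correct with images but wrong without images
    Concordant pairs (both correct or both wrong) do not enter the test.
    """
    n = b + c
    if n == 0:
        return 1.0
    # Chi-square statistic with Edwards' continuity correction, 1 df
    chi2 = (abs(b - c) - 1) ** 2 / n
    # Survival function of a chi-square with 1 df, via the error function
    return math.erfc(math.sqrt(chi2 / 2))

# Illustrative discordant counts consistent with the reported marginals
# (27/43 -> 33/43 correct); the actual paired counts are not in the abstract.
print(round(mcnemar_p(0, 6), 3))
```

Only the discordant pairs drive the statistic, which is why a modest net gain of six questions can reach significance when few answers flip in the opposite direction.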
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,561 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,452 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,948 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,797 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations