OpenAlex · Updated hourly · Last updated: 04.05.2026, 19:24

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Artificial Intelligence Clinical Reasoning in Board-Style Clinical Vignettes: A Comparative Study

2025 · 0 citations · Cureus · Open Access
Open full text at publisher

Citations: 0
Authors: 5
Year: 2025

Abstract

AIM: This study evaluated the diagnostic accuracy of four large language model (LLM) artificial intelligence (AI) platforms in generating primary and differential diagnoses using United States Medical Licensing Examination (USMLE) Step 1 clinical vignettes.

METHODS: Ten USMLE Step 1 clinical vignette questions were selected, and answer choices were removed to simulate open-ended diagnostic reasoning. Each LLM (ChatGPT GPT-4o-mini from OpenAI, Meta AI Llama 4, Google Gemini 2.0 Flash, and Claude Sonnet 4 from Anthropic) was prompted to provide both a primary diagnosis and a ranked differential diagnosis. Responses were evaluated using a three-point scoring rubric: 2 points for a correct final diagnosis, 1 point for a correct differential diagnosis only, and 0 points for an incorrect or missing diagnosis. The total possible score per model was 20 points.

RESULTS: Claude Sonnet 4 achieved the highest accuracy with a total score of 20/20 (100%), followed by Google Gemini at 19/20 (95%), ChatGPT GPT-4o-mini at 17/20 (85%), and Meta AI Llama 4 at 13/20 (65%). All models demonstrated clinically relevant reasoning; however, diagnostic prioritization and accuracy varied by platform.

DISCUSSION: The findings indicate that current LLMs possess strong potential as supplemental tools for diagnostic reasoning and medical education. Their ability to generate accurate diagnoses from complex clinical scenarios suggests value for training and clinical decision support. However, variability across platforms highlights the need for cautious implementation. Ethical considerations, including algorithmic bias, overreliance on AI-generated outputs, and patient privacy, must be addressed prior to clinical integration. Future research should incorporate larger and more diverse case sets, include specialty-specific assessments, and establish governance frameworks to guide responsible AI use in medical settings.
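The three-point rubric described in the abstract can be sketched as a small scoring function. This is a hypothetical illustration only; the function and variable names are assumptions and do not come from the study itself.

```python
# Hypothetical sketch of the scoring rubric described in the abstract
# (names are illustrative assumptions, not taken from the study):
# 2 points for a correct primary diagnosis, 1 point if the correct
# diagnosis appears only in the ranked differential, 0 otherwise.

def score_vignette(correct, primary, differential):
    if primary == correct:
        return 2
    if correct in differential:
        return 1
    return 0

def total_score(results):
    # results: iterable of (correct, primary, differential) per vignette
    return sum(score_vignette(c, p, d) for c, p, d in results)

# With ten vignettes, the maximum possible score is 10 * 2 = 20,
# matching the 20-point ceiling reported in the abstract.
perfect_run = [("pneumonia", "pneumonia", ["bronchitis"])] * 10
print(total_score(perfect_run))  # 20
```

Under this rubric, Claude Sonnet 4's reported 20/20 corresponds to a correct primary diagnosis on all ten vignettes, while lower totals mix correct-differential (1 point) and missed (0 point) cases.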


Topics

Artificial Intelligence in Healthcare and Education · Clinical Reasoning and Diagnostic Skills · Machine Learning in Healthcare