This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Artificial Intelligence Clinical Reasoning in Board-Style Clinical Vignettes: A Comparative Study
Citations: 0
Authors: 5
Year: 2025
Abstract
AIM: This study evaluated the diagnostic accuracy of four large language model (LLM) artificial intelligence (AI) platforms in generating primary and differential diagnoses using United States Medical Licensing Examination (USMLE) Step 1 clinical vignettes.

METHODS: Ten USMLE Step 1 clinical vignette questions were selected, and answer choices were removed to simulate open-ended diagnostic reasoning. Each LLM (ChatGPT GPT-4o-mini by OpenAI, Meta AI Llama 4, Google Gemini 2.0 Flash, and Claude Sonnet 4 by Anthropic) was prompted to provide both a primary diagnosis and a ranked differential diagnosis. Responses were evaluated using a three-point scoring rubric: 2 points for a correct final diagnosis, 1 point for a correct differential diagnosis only, and 0 points for an incorrect or missing diagnosis. The total possible score per model was 20 points.

RESULTS: Claude Sonnet 4 achieved the highest accuracy with a total score of 20/20 (100%), followed by Google Gemini at 19/20 (95%), ChatGPT GPT-4o-mini at 17/20 (85%), and Meta AI Llama 4 at 13/20 (65%). All models demonstrated clinically relevant reasoning; however, diagnostic prioritization and accuracy varied by platform.

DISCUSSION: The findings indicate that current LLMs possess strong potential as supplemental tools for diagnostic reasoning and medical education. Their ability to generate accurate diagnoses from complex clinical scenarios suggests value for training and clinical decision support. However, variability across platforms highlights the need for cautious implementation. Ethical considerations, including algorithmic bias, overreliance on AI-generated outputs, and patient privacy, must be addressed prior to clinical integration. Future research should incorporate larger and more diverse case sets, include specialty-specific assessments, and establish governance frameworks to guide responsible AI use in medical settings.
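The rubric described in the methods (2 points for a correct final diagnosis, 1 point for a correct diagnosis appearing only in the differential, 0 otherwise, summed over ten vignettes for a maximum of 20) can be sketched as follows. This is a minimal illustration of the scoring logic only; the function names and example data are hypothetical and not taken from the paper.

```python
# Sketch of the three-point scoring rubric described in the abstract:
# 2 points for a correct final (primary) diagnosis, 1 point if the correct
# diagnosis appears only in the ranked differential, 0 points otherwise.
# Names and example cases below are illustrative, not from the study.

def score_response(correct: str, primary: str, differential: list[str]) -> int:
    """Return 0, 1, or 2 rubric points for one vignette response."""
    key = correct.strip().lower()
    if primary.strip().lower() == key:
        return 2  # correct final diagnosis
    if any(d.strip().lower() == key for d in differential):
        return 1  # correct diagnosis present only in the differential
    return 0  # incorrect or missing diagnosis


def total_score(responses: list[tuple[str, str, list[str]]]) -> int:
    """Sum rubric points over all vignettes (maximum 2 * len(responses))."""
    return sum(score_response(c, p, d) for c, p, d in responses)


# Hypothetical example with two vignettes (maximum possible score: 4).
example = [
    ("myasthenia gravis", "myasthenia gravis", ["botulism"]),      # 2 points
    ("sarcoidosis", "tuberculosis", ["sarcoidosis", "lymphoma"]),  # 1 point
]
print(total_score(example))  # -> 3
```

With ten vignettes, as in the study, the same logic yields the reported per-model totals out of 20.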
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,560 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,451 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,948 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,797 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations