OpenAlex · Updated hourly · Last updated: 26.04.2026, 01:25

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Direct comparison of GPT-4 and human physicians in MKSAP-19 multiple-choice questions

2024 · 0 citations · Canadian Journal of General Internal Medicine · Open Access

Citations: 0 · Authors: 3 · Year: 2024

Abstract

Several studies have compared scores of artificial intelligence (AI) algorithms on medical multiple-choice questions (MCQs) with reference standards. In this study, the authors directly compared the scores of an AI algorithm (Generative Pre-trained Transformer 4 [GPT-4]) with those of clinicians. A stratified random sample of 600 Medical Knowledge Self-Assessment Program-19 (MKSAP-19) MCQs was entered into GPT-4. The proportion of questions GPT-4 answered correctly was compared with the answer selected by the majority of the MKSAP clinician testing group (consensus clinician) and with the proportion of the MKSAP clinician testing group who selected the correct answer (average clinician). GPT-4 answered 496 questions correctly (82.7%, 95% CI 79.6 to 85.7). This was significantly lower than the consensus clinician (88.0%, 95% CI 85.4 to 90.6; McNemar's T statistic 10.0, P = 0.0015) but significantly higher than the average clinician (64.7%, 95% CI 63.1 to 66.3; paired T statistic 12.7, P < 0.0001). Results did not vary significantly by specialty. GPT-4 scored significantly lower than the consensus clinician, but significantly higher than the average clinician, on MKSAP MCQs.
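The GPT-4 vs. consensus-clinician comparison uses McNemar's test, which compares paired binary outcomes using only the discordant pairs (questions where exactly one of the two got the right answer). A minimal sketch of that calculation follows; the discordant counts below are hypothetical, chosen only so the marginals match the abstract (GPT-4: 496/600 correct, consensus: 528/600 correct, so the discordant cells must differ by 32) since the abstract does not report the actual 2x2 table.

```python
import math

def mcnemar_statistic(n01: int, n10: int) -> float:
    """McNemar's chi-squared statistic (no continuity correction).
    n01 and n10 are the two discordant cell counts of the paired
    2x2 table (rater A only correct, rater B only correct)."""
    return (n01 - n10) ** 2 / (n01 + n10)

def chi2_sf_1df(stat: float) -> float:
    """Survival function of chi-squared with 1 df, via the identity
    chi2_1 = Z**2, so P = erfc(sqrt(stat / 2))."""
    return math.erfc(math.sqrt(stat / 2))

# Hypothetical split of the 102-ish discordant questions: 67 answered
# correctly only by the consensus clinician, 35 only by GPT-4
# (difference of 32, matching the reported marginals).
n01, n10 = 67, 35
stat = mcnemar_statistic(n01, n10)
p = chi2_sf_1df(stat)
print(f"statistic = {stat:.2f}, p = {p:.4f}")
```

With these assumed counts the statistic comes out near the reported 10.0 and the p-value near the reported 0.0015, illustrating the mechanics rather than reproducing the study's exact data.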

Similar works

Authors

Institutions

Topics

Artificial Intelligence in Healthcare and Education
Clinical Reasoning and Diagnostic Skills
Radiomics and Machine Learning in Medical Imaging