This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Direct comparison of GPT-4 and human physicians in MKSAP-19 multiple-choice questions
Citations: 0
Authors: 3
Year: 2024
Abstract
Several studies have compared the scores of artificial intelligence (AI) algorithms on medical multiple-choice questions (MCQs) with reference standards. In this study, the authors directly compared the scores of an AI algorithm (Generative Pre-trained Transformer 4 [GPT-4]) with those of clinicians. A stratified random sample of 600 Medical Knowledge Self-Assessment Program-19 (MKSAP) MCQs was entered into GPT-4. The proportion of questions GPT-4 answered correctly was compared with the answer selected by the majority of the MKSAP clinician testing group (consensus clinician) and with the proportion of the MKSAP clinician testing group who selected the correct answer (average clinician). GPT-4 answered 496 questions correctly (82.7%, 95% CI 79.6 to 85.7). This was significantly lower than the consensus clinician (88.0%, 95% CI 85.4 to 90.6; McNemar's T statistic 10.0, P = 0.0015) but significantly higher than the average clinician (64.7%, 95% CI 63.1 to 66.3; paired T statistic = 12.7, P < 0.0001). Results did not vary significantly by specialty. GPT-4 scored significantly lower than the consensus clinician, but significantly higher than the average clinician, on MKSAP MCQs.
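The paired comparison against the consensus clinician uses McNemar's test, which depends only on the discordant pairs (questions where exactly one of the two raters is correct). A minimal stdlib-only Python sketch of that statistic follows; the discordant counts used here are hypothetical (the abstract does not report them) and were chosen only so the statistic matches the reported T = 10.0:

```python
import math

def mcnemar_chi2(b: int, c: int) -> tuple[float, float]:
    """McNemar's chi-square test (no continuity correction) for paired
    binary outcomes on the same question set.

    b: questions only rater A (e.g. GPT-4) answered correctly
    c: questions only rater B (e.g. the consensus clinician) answered correctly
    Concordant pairs (both right or both wrong) drop out of the statistic.
    """
    stat = (b - c) ** 2 / (b + c)
    # Survival function of chi-square with 1 df: P(X > x) = erfc(sqrt(x/2))
    p_value = math.erfc(math.sqrt(stat / 2))
    return stat, p_value

# Hypothetical discordant counts, for illustration only.
stat, p = mcnemar_chi2(30, 60)
print(f"chi2 = {stat:.1f}, p = {p:.4f}")  # chi2 = 10.0, p = 0.0016
```

With these illustrative counts the p-value comes out near the reported 0.0015; any (b, c) pair with the same (b - c)² / (b + c) would give the same statistic.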
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,521 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,412 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,891 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,575 citations