This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Performance of ChatGPT in French language Parcours d'Accès Spécifique Santé test and in OBGYN
Citations: 26
Authors: 5
Year: 2023
Abstract
OBJECTIVES: To evaluate the performance of ChatGPT on a French medical school entrance examination.

METHODS: A cross-sectional study using a consecutive sample of text-based multiple-choice practice questions for the Parcours d'Accès Spécifique Santé. ChatGPT answered the questions in French. We compared the performance of ChatGPT in obstetrics and gynecology (OBGYN) with its performance on the whole test.

RESULTS: Overall, 885 questions were evaluated. The mean test score was 34.0% (306 out of a maximal score of 900). The performance of ChatGPT was 33.0% (292 correct answers out of 885 questions). ChatGPT performed worse in biostatistics (13.3% ± 19.7%) than in anatomy (34.2% ± 17.9%; P = 0.037) and than in histology and embryology (40.0% ± 18.5%; P = 0.004). The OBGYN part comprised 290 questions. There was no difference between the OBGYN part and the whole entrance test in either the test scores or the performance of ChatGPT (P = 0.76 and P = 0.10, respectively).

CONCLUSIONS: ChatGPT answered one-third of the questions in the French entrance practice test correctly. Its performance in OBGYN was similar to its performance on the whole test.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,549 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,443 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,941 cit.
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,792 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 cit.