This is an overview page with metadata for this scientific article. The full article is available from the publisher.
ChatGPT and the clinical informatics board examination: the end of unproctored maintenance of certification?
Citations: 46
Authors: 4
Year: 2023
Abstract
We aimed to assess ChatGPT's performance on the Clinical Informatics Board Examination and to discuss the implications of large language models (LLMs) for board certification and maintenance. We tested ChatGPT using 260 multiple-choice questions from Mankowitz's Clinical Informatics Board Review book, omitting 6 image-dependent questions. ChatGPT answered 190 (74%) of 254 eligible questions correctly. While performance varied across the Clinical Informatics Core Content Areas, differences were not statistically significant. ChatGPT's performance raises concerns about the potential misuse in medical certification and the validity of knowledge assessment exams. Since ChatGPT is able to answer multiple-choice questions accurately, permitting candidates to use artificial intelligence (AI) systems for exams will compromise the credibility and validity of at-home assessments and undermine public trust. The advent of AI and LLMs threatens to upend existing processes of board certification and maintenance and necessitates new approaches to the evaluation of proficiency in medical education.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,100 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,466 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations