This is an overview page with metadata for this scientific work. The full article is available from the publisher.
The Validity of Generative Artificial Intelligence in Evaluating Medical Students in Objective Structured Clinical Examination: Experimental Study (Preprint)
Citations: 0
Authors: 7
Year: 2025
Abstract
Background: The Objective Structured Clinical Examination (OSCE) is widely used to evaluate students in medical education. However, it is resource-intensive, which makes implementation challenging. We hypothesized that generative artificial intelligence (AI) such as ChatGPT-4 could serve as a complementary assessor and reduce the burden on physicians evaluating OSCEs.

Objective: By comparing evaluation scores between generative AI and physicians, this study aimed to assess the validity of generative AI as a complementary assessor for the OSCE.

Methods: This experimental study was conducted at a medical university in Japan. We recruited 11 fifth-year medical students during the general internal medicine clerkship from April 2023 to December 2023. Participants conducted a mock medical interview with a patient presenting with abdominal pain and wrote patient notes. Four physicians independently evaluated the participants by reviewing the interview videos and patient notes, while ChatGPT-4 was given the interview transcripts and notes. Evaluations used a 6-domain rubric (patient care and communication, history taking, physical examination, patient notes, clinical reasoning, and management). Each domain was scored on a 6-point Likert scale ranging from 1 (very poor) to 6 (excellent). Median scores were compared using the Wilcoxon signed-rank test, and agreement between ChatGPT-4 and the physicians was assessed using intraclass correlation coefficients (ICCs). P values <.05 were considered statistically significant.

Results: ChatGPT-4 assigned higher scores than physicians for physical examination (median 4.0, IQR 4.0-5.0 vs median 4.0, IQR 3.0-4.0; P=.02), patient notes (median 6.0, IQR 5.0-6.0 vs median 4.0, IQR 4.0-4.0; P=.002), clinical reasoning (median 5.0, IQR 5.0-5.0 vs median 4.0, IQR 3.0-4.0; P<.001), and management (median 6.0, IQR 5.0-6.0 vs median 4.0, IQR 2.5-4.5; P=.002), whereas there were no significant differences for patient care and communication (median 5.0, IQR 5.0-5.0 vs median 5.0, IQR 4.0-5.0; P=.06) or history taking (median 5.0, IQR 4.0-5.0 vs median 5.0, IQR 4.0-5.0; P>.99). ICC values indicated poor agreement in all domains; history taking showed the highest value, which was still poor (ICC=0.36, 95% CI -0.32 to 0.78).

Conclusions: ChatGPT-4 produced higher evaluation scores than physicians in several OSCE domains, and agreement between the two was poor. These preliminary results suggest that generative AI such as ChatGPT-4 may be able to support assessment in some OSCE domains and shows potential as a complementary assessor, but further research is needed to establish its reproducibility and validity.

Trial Registration: University Hospital Medical Information Network Clinical Trials Registry UMIN000050489; https://center6.umin.ac.jp/cgi-open-bin/ctr/ctr_his_list.cgi?recptno=R000057513
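The analysis described in the abstract pairs a paired Wilcoxon signed-rank test on domain scores with an ICC for rater agreement. Below is a minimal Python sketch of that kind of comparison for a single domain. The score arrays are hypothetical (the study data are not reproduced here), and the ICC form used (ICC2: two-way random effects, absolute agreement, single rater) is an assumption, since the abstract does not specify one.

```python
# Sketch of a paired score comparison and agreement check for one OSCE
# domain. Scores are hypothetical 6-point Likert values for 11 students.
import numpy as np
import pandas as pd
from scipy.stats import wilcoxon
import pingouin as pg

gpt4 = np.array([6, 5, 6, 6, 5, 6, 5, 6, 6, 5, 6])        # hypothetical
physician = np.array([4, 4, 4, 5, 4, 4, 3, 4, 4, 4, 4])   # hypothetical

# Paired comparison of the two assessors' scores per student.
stat, p = wilcoxon(gpt4, physician)
print(f"Wilcoxon signed-rank: W={stat:.1f}, p={p:.3f}")

# Agreement: long-format table with one row per (student, rater) rating.
df = pd.DataFrame({
    "student": np.tile(np.arange(11), 2),
    "rater": ["gpt4"] * 11 + ["physician"] * 11,
    "score": np.concatenate([gpt4, physician]),
})
icc = pg.intraclass_corr(data=df, targets="student",
                         raters="rater", ratings="score")
# ICC2 = two-way random effects, absolute agreement, single rater
# (an assumed choice; the paper may have used a different ICC form).
print(icc.set_index("Type").loc["ICC2", ["ICC", "CI95%"]])
```

With synthetic scores this skewed, the Wilcoxon p-value is small and the ICC low, mirroring the pattern the abstract reports for domains such as patient notes.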
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations