OpenAlex · Updated hourly · Last updated: 20.03.2026, 01:41

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

Factors Associated with Accuracy of Large Language Models Artificial Intelligence in Basic Medical Science Examinations: Cross-Sectional Study (Preprint)

2024 · 0 citations · Open Access
Open full text at publisher

Citations: 0

Authors: 4

Year: 2024

Abstract

Background: Artificial intelligence (AI) is widely applied across several industries, including medical education. Content validation and answer quality depend on the training datasets and the optimization of each model. This study explores the accuracy of large language model (LLM) AI on basic medical science examinations and the factors related to that accuracy.

Objective: To evaluate factors associated with the accuracy of large language models (ChatGPT, GPT-4, Google Bard, and Microsoft Bing) in answering multiple-choice questions from basic medical science examinations.

Methods: We employed questions closely aligned with the content and topic distribution of Thailand's Step 1 National Medical Licensing Examination. Variables such as the difficulty index, discrimination index, and question characteristics were collected. The questions were then input simultaneously into ChatGPT, GPT-4, Microsoft Bing, and Google Bard, and their responses were recorded. The accuracy of these LLMs and its associated factors were analyzed using multivariable logistic regression, which assessed the effect of each factor on model accuracy, with results reported as odds ratios (ORs).

Results: GPT-4 was the top-performing model, with an overall accuracy of 89.07% (95% CI 84.76-92.41), significantly outperforming the others (p < 0.001). Microsoft Bing followed at 83.69% (95% CI 78.85-87.80), ChatGPT at 67.02% (95% CI 61.20-72.48), and Google Bard at 63.83% (95% CI 57.92-69.44). The multivariable logistic regression showed a correlation between question difficulty and model performance, with GPT-4 demonstrating the strongest association. Interestingly, no significant correlation was found between model accuracy and question length, negative wording, clinical scenarios, or the discrimination index for most models; the exception was Google Bard, which showed varying correlations.

Conclusions: GPT-4 and Microsoft Bing demonstrated comparable accuracy to each other, and both were superior to ChatGPT and Google Bard in the domain of basic medical science. The accuracy of these models is significantly influenced by the item's difficulty index (p), indicating that the LLMs are more accurate on easier questions. This suggests that the more accurate models, such as GPT-4 and Bing, can be valuable tools for understanding and learning basic medical science concepts.
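The abstract reports each model's accuracy with a 95% confidence interval and summarizes the regression results as odds ratios. As a minimal sketch of those two statistics, the snippet below computes a Wald (normal-approximation) binomial CI and an odds ratio from a 2x2 table. All counts are hypothetical: the abstract does not report the total number of questions or any per-group tallies.

```python
import math

def accuracy_ci(correct: int, total: int, z: float = 1.96) -> tuple[float, float, float]:
    """Point accuracy with a Wald (normal-approximation) 95% CI."""
    p = correct / total
    half = z * math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half), min(1.0, p + half)

def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """Odds ratio from a 2x2 table: (a/b) / (c/d)."""
    return (a / b) / (c / d)

# Hypothetical counts -- the abstract does not report the item total,
# so these numbers are illustrative only.
acc, lo, hi = accuracy_ci(correct=250, total=282)
print(f"accuracy {acc:.2%} (95% CI {lo:.2%} - {hi:.2%})")

# Odds of a correct answer on easy vs. hard items (also hypothetical):
# 200 correct / 20 incorrect on easy items, 50 / 12 on hard items.
print(f"OR = {odds_ratio(200, 20, 50, 12):.2f}")
```

The study itself used multivariable logistic regression (adjusting several predictors at once), which generalizes the single 2x2 odds ratio shown here.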

Similar works

Authors

Topics

Artificial Intelligence in Healthcare and Education · Radiomics and Machine Learning in Medical Imaging · Radiology practices and education