OpenAlex · Updated hourly · Last updated: 08.04.2026, 13:27

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Large Language Models Evaluation in Answering Multiple Choice Questions in Biochemistry Course (Preprint)

2024 · 0 citations · Open Access
Open full text at the publisher

Citations: 0

Authors: 3

Year: 2024

Abstract

BACKGROUND: Recent advancements in artificial intelligence (AI), particularly in large language models (LLMs), have ushered in a new era of innovation across various fields, with medicine at the forefront of this technological revolution. Many studies have indicated that, at their current level of development, LLMs can pass various board exams. However, their ability to answer subject-specific questions requires validation.

OBJECTIVE: The objective of this study was to conduct a comprehensive analysis comparing the performance of advanced LLM chatbots, Claude (Anthropic), GPT-4 (OpenAI), Gemini (Google), and Copilot (Microsoft), against the academic results of medical students in a medical biochemistry course.

METHODS: We used 200 USMLE-style multiple-choice questions selected from the course exam database. They encompassed various complexity levels and were distributed across 23 distinct topics. Questions containing tables or images were excluded from the study. The results of 5 successive attempts by Claude 3.5 Sonnet, GPT-4-1106, Gemini 1.5 Flash, and Copilot to answer this question set were evaluated for accuracy in August 2024. Statistica 13.5.0.17 (TIBCO® Statistica™) was used to compute basic descriptive statistics. Given the binary nature of the data, the Chi-square test was used to compare results among the different chatbots, with a statistical significance level of P<.05.

RESULTS: On average, the selected chatbots correctly answered 81.1±12.8% of the questions, surpassing the students' performance by 8.3% (P=.017). Claude showed the best performance on biochemistry MCQs, correctly answering 92.5% of questions, followed by GPT-4 (85.1%), Gemini (78.5%), and Copilot (64%). The chatbots achieved their best results in the following four topics: Eicosanoids (100%), Bioenergetics and Electron transport chain (96.4±7.2%), Ketone bodies (93.8±12.5%), and Hexose monophosphate pathway (91.7±16.7%). The Pearson Chi-square test indicated a statistically significant association between the answers of all 4 chatbots (P<.001 to P<.044).

CONCLUSIONS: Our study suggests that different AI models may have unique strengths in specific medical fields, which could be leveraged for targeted educational support in biochemistry courses. This performance highlights the potential of AI in medical education and assessment.
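The pairwise Chi-square comparison described in the METHODS section can be sketched as follows. This is a minimal illustration, not the authors' analysis script: the correct/incorrect counts are derived from the reported accuracies (Claude 92.5% and Copilot 64% of 200 questions), and the function name is illustrative.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    # Expected counts under independence, from the row and column marginals
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Claude: 185/200 correct; Copilot: 128/200 correct (from the reported percentages)
stat = chi_square_2x2(185, 15, 128, 72)
# With 1 degree of freedom, the critical value at P=.05 is 3.84,
# so a statistic well above that indicates a significant difference.
print(round(stat, 1))
```

With these counts the statistic is about 47.7, far above the 3.84 threshold, consistent with the significant differences the abstract reports among the chatbots.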

Topics

Artificial Intelligence in Healthcare and Education