OpenAlex · Updated hourly · Last updated: 30.04.2026, 12:42

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Performance of ChatGPT-4, Gemini, and DeepSeek-V3 on answering the multiple choice questions from Taiwan national dental technician licensing examinations and their self-learning abilities over a three-week period

2025 · 1 citation · Journal of Dental Sciences · Open Access

Citations: 1 · Authors: 4 · Year: 2025

Abstract

Background/purpose: Large language models (LLMs) can help students learn specific dental subjects and can therefore serve as educational support tools for dental students. This study evaluated whether LLMs could correctly answer multiple-choice questions (MCQs) selected from the 2023 Taiwan national dental technician licensing examination (TNDTLE), and whether the LLMs had the self-learning ability to improve their performance on these exam questions over a three-week period. Materials and methods: Three LLMs, ChatGPT-4, Gemini, and DeepSeek-V3, were used to answer 194 text-based MCQs selected from the 2023 TNDTLE, and the initial accuracy rates (ARs) were recorded. The same process was repeated one, two, and three weeks later, and the subsequent ARs were also recorded. The initial and subsequent overall ARs were compared to assess whether the three LLMs exhibited self-learning ability over time. Results: The initial overall ARs for ChatGPT-4, Gemini, and DeepSeek-V3 were 52.1 %, 57.2 %, and 69.6 %, respectively, indicating that DeepSeek-V3 outperformed ChatGPT-4 and Gemini. However, Gemini showed significant improvement in performance at one week and three weeks, whereas ChatGPT-4 and DeepSeek-V3 showed no significant improvement over time. Among the nine subjects of dental technology, Gemini showed notable progress in several subjects, ChatGPT-4 showed limited improvement, and DeepSeek-V3 remained stable overall. Conclusion: Without external prompts, Gemini demonstrates self-learning potential. DeepSeek-V3 shows stable performance but limited learning ability, while ChatGPT-4 exhibits minimal learning. In improvement of self-learning ability over time, Gemini outperforms ChatGPT-4 and DeepSeek-V3.
