OpenAlex · Updated hourly · Last updated: 23.03.2026, 03:58

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Open- and closed-source LLMs in medical and engineering education

2026 · 0 Citations · Frontiers in Medicine · Open Access
Open full text at publisher

0

Citations

10

Authors

2026

Year

Abstract

The rapid development of large language models (LLMs), such as the closed-source GPT-4, has revolutionized education by assisting student learning. However, open-source LLMs, which offer advantages in accessibility, customization, and transparency, remain under-utilized in both medical and engineering education. This work systematically evaluates the performance of open-source LLMs (DeepSeek, GLM-4, Kimi) and the closed-source GPT-4 in assisting medical and engineering students' learning across diverse question types. We found that DeepSeek outperformed the other models on all question types, achieving the highest accuracy rates. To further improve LLM-generated responses, prompt engineering strategies, such as role-playing, generated knowledge prompting, chain-of-thought prompting, few-shot prompting, and output-style constraints, were introduced. Post-training evaluations showed significant improvements in model accuracy, with DeepSeek exceeding 95% accuracy on all question types. Among them, short-answer questions achieved the best responses, with accuracy reaching up to 97% across the four LLMs, indicating the important role of prompt engineering in problem-solving tasks. The findings highlight the potential of open-source models in supporting medical and engineering education, bridging a critical gap in open-source LLM evaluation and advocating for their wider integration into academic settings.
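The abstract names several prompt engineering strategies (role-playing, generated knowledge prompting, chain-of-thought prompting, few-shot prompting, and output-style constraints) without giving the prompts themselves. The sketch below is a hypothetical illustration of how such strategies can be combined into a single prompt; all template wording is an assumption, not the authors' actual prompts.

```python
# Illustrative sketch only: combines the prompt-engineering strategies named
# in the abstract. All template text here is hypothetical, not taken from
# the paper's evaluation prompts.

def build_prompt(question: str, examples: list[tuple[str, str]]) -> str:
    """Assemble one prompt string from several strategies."""
    parts = [
        # Role-playing: assign the model a persona.
        "You are an experienced tutor for medical and engineering students.",
        # Generated knowledge prompting: elicit relevant facts first.
        "First, list the key facts and formulas relevant to the question.",
        # Chain-of-thought prompting: request step-by-step reasoning.
        "Then reason through the problem step by step.",
        # Output-style constraint: fix the final answer format.
        "Finish with a single line of the form 'Answer: <final answer>'.",
    ]
    # Few-shot prompting: prepend worked question/answer pairs.
    for q, a in examples:
        parts.append(f"Q: {q}\nA: {a}")
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

demo = build_prompt(
    "What is the SI unit of electrical resistance?",
    examples=[("What is the SI unit of force?", "Answer: newton (N)")],
)
print(demo)
```

The resulting string would be sent as the user message to any of the evaluated models; the per-strategy lines make it easy to ablate one strategy at a time when measuring accuracy.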

Similar works

Authors

Institutions

Topics

Artificial Intelligence in Healthcare and Education · Text Readability and Simplification · Intelligent Tutoring Systems and Adaptive Learning