This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Open- and closed-source LLMs in medical and engineering education
0
Citations
10
Authors
2026
Year
Abstract
The rapid development of large language models (LLMs), such as the closed-source GPT-4, has revolutionized education by assisting student learning. However, open-source LLMs, which offer advantages in accessibility, customization, and transparency, remain under-utilized in both medical and engineering education. This work systematically evaluates the performance of open-source LLMs (DeepSeek, GLM-4, Kimi) and the closed-source GPT-4 in assisting medical and engineering students across diverse question types. We found that DeepSeek outperformed the other models on all question types, achieving the highest accuracy rates. To further improve LLM-generated responses, prompt engineering strategies such as role-playing, generated knowledge prompting, chain-of-thought prompting, few-shot prompting, and output-style specification were introduced. Post-training evaluations showed significant improvements in model accuracy, with DeepSeek exceeding 95% accuracy on all question types. Among them, short-answer questions elicited the best responses, with accuracy rates reaching up to 97% across the four LLMs, indicating the important role of prompt engineering in problem-solving tasks. The findings highlight the potential of open-source models in supporting medical and engineering education, bridging a critical gap in open-source LLM evaluation and advocating for their wider integration into academic settings.
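The abstract names several prompt engineering strategies (role-playing, few-shot prompting, chain-of-thought prompting, output-style specification). As an illustration only, the sketch below shows how such a combined prompt might be assembled; the function name, structure, and wording are hypothetical and are not taken from the paper itself.

```python
def build_prompt(role: str, examples: list[tuple[str, str]], question: str) -> str:
    """Assemble a prompt combining role-playing, few-shot examples,
    chain-of-thought instruction, and an output-style constraint.
    This is a generic sketch, not the paper's actual prompt template."""
    parts = [f"You are {role}."]  # role-playing
    for q, a in examples:  # few-shot examples
        parts.append(f"Question: {q}\nAnswer: {a}")
    parts.append(
        f"Question: {question}\n"
        "Think step by step, then state the final answer "  # chain-of-thought
        "on a single line prefixed with 'Answer:'."  # output-style constraint
    )
    return "\n\n".join(parts)

# Hypothetical usage with a short-answer medical question:
prompt = build_prompt(
    role="an experienced medical educator",
    examples=[("Which vitamin deficiency causes scurvy?", "Vitamin C")],
    question="Which cranial nerve controls the muscles of facial expression?",
)
print(prompt)
```

Each strategy maps to one segment of the assembled string, so individual strategies can be toggled on or off to measure their contribution to accuracy, mirroring the kind of comparison the abstract describes.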
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,291 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,143 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,535 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,452 citations