OpenAlex · Updated hourly · Last updated: 12 Mar 2026, 23:07

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Large Language Models and Examination Performance in Healthcare Education: A Bibliometric Analysis (Preprint)

2026 · 0 citations · Open Access
Open full text at the publisher

Citations: 0
Authors: 4
Year: 2026

Abstract

<sec> <title>BACKGROUND</title> Large language models (LLMs) are increasingly used and evaluated in health professions education, including studies assessing model performance on healthcare examination questions. The rapid growth and heterogeneity of this literature make it difficult to track research concentration, collaboration patterns, and emerging themes. </sec> <sec> <title>OBJECTIVE</title> To map publication trends, key contributors, collaboration networks, and thematic hotspots in research on LLM-supported exam solving in healthcare education. </sec> <sec> <title>METHODS</title> We conducted a bibliometric analysis of publications from 2023 to 2025. Searches were performed in PubMed, Scopus, CINAHL Ultimate (EBSCOhost), and Web of Science using structured terms for AI/LLMs (eg, ChatGPT, generative AI, large language models) combined with healthcare education and training concepts. Eligible studies addressed AI-based technologies within healthcare education or training contexts; studies focused solely on clinical practice or non-educational applications were excluded. Bibliographic metadata from PubMed (TXT) and Scopus (BIB) were merged and analyzed using bibliometrix/Biblioshiny (R) and VOSviewer to quantify productivity, collaboration (including international co-authorship), and keyword co-occurrence patterns. </sec> <sec> <title>RESULTS</title> The dataset comprised 262 documents from 158 sources, with an annual publication growth rate of 36.58% and a mean document age of 1.83 years. A total of 1,351 authors contributed (mean 5.97 co-authors per document); international co-authored publications accounted for 13.36%. Most records were journal articles (253/262), followed by letters (8/262) and one conference paper. Annual output rose from 52 (2023) to 113 (2024; +117.3%), then decreased to 97 (2025; −14.2% vs 2024) while remaining above 2023 levels.
JMIR Medical Education published the most articles on this topic (34/262), followed by Scientific Reports (9/262) and BMC Medical Education (7/262). Frequent keywords included “humans” (n=144), “artificial intelligence” (n=82), “generative AI” (n=30), and “large language models” (n=20); education-focused terms such as “educational measurement/methods” were also prominent (n=76). </sec> <sec> <title>CONCLUSIONS</title> Research on LLMs and exam performance in healthcare education expanded rapidly from 2023 to 2025, with publication activity concentrated in a limited set of journals and relatively low international collaboration. Thematic patterns emphasize assessment-related outcomes and LLM/ChatGPT performance, supporting the need for more comparable, transparent reporting (eg, prompts and model versions) and education-centered outcomes beyond accuracy in future studies. </sec> <sec> <title>CLINICALTRIAL</title> / </sec>
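The growth figures reported in the RESULTS section are internally consistent and can be recomputed from the annual counts alone. The following Python sketch reproduces them; the compound-growth formula is assumed to match the "annual growth rate" indicator that bibliometrix reports (the abstract does not state the formula, but the value agrees):

```python
# Annual publication counts taken from the abstract.
counts = {2023: 52, 2024: 113, 2025: 97}

years = sorted(counts)
span = years[-1] - years[0]  # number of year-over-year steps (here: 2)

# Compound annual growth rate: (last / first) ** (1 / span) - 1
cagr = (counts[years[-1]] / counts[years[0]]) ** (1 / span) - 1
print(f"annual growth rate: {cagr:.2%}")  # 36.58%, as reported

# Year-over-year changes: +117.3% (2023 -> 2024), -14.2% (2024 -> 2025)
for prev, year in zip(years, years[1:]):
    change = (counts[year] - counts[prev]) / counts[prev]
    print(f"{prev} -> {year}: {change:+.1%}")
```

Note that the headline 36.58% is a compound rate over the whole 2023–2025 window, which is why it stays positive even though output declined from 2024 to 2025.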

Topics

Artificial Intelligence in Healthcare and Education · Diversity and Career in Medicine · Social Media in Health Education