This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Mapping Applications and Outcomes of Large-Language-Model-Generated Cases in Health Professions Education: A Scoping Review.
Citations: 0
Authors: 3
Year: 2026
Abstract
Objective: Large language models (LLMs) have rapidly permeated health professions education and are increasingly used to generate clinical cases and vignettes, yet their characteristics, evaluation methods, and educational impact remain unclear. This review aimed to map how LLMs are used to generate cases in health professions education and to summarize reported case characteristics, evaluation approaches, bias, and educational outcomes.

Methods: We conducted a scoping review following Arksey and O'Malley's framework, reported in accordance with PRISMA-ScR. PubMed, Web of Science, and Scopus were searched on 27 August 2025. Of 2,023 records, 72 full texts were assessed and 23 studies met the inclusion criteria. Data were charted with a structured extraction form.

Results: Across the 23 studies, 33 distinct LLMs were used, most commonly GPT-based models (54.5%). Cases were mainly text-based (69%), with additional image-based (20.7%) and audio-based (10.3%) formats across 23 clinical and educational domains. Prompts were reported in 65.2% of studies, and 60.9% included a formal quality evaluation, with results ranging from high-quality to clearly problematic examples. Seven studies (30.4%) identified bias or discriminatory patterns. Students participated in 39.1% of studies, but no higher-level educational outcomes, such as behavior change or long-term performance, were reported.

Conclusions: LLM-generated cases appear feasible and versatile across health professions education, but the supporting evidence is early and methodologically heterogeneous. Future research should standardize quality evaluation, rigorously assess learning and behavioral outcomes, and systematically audit bias in generated content.